Imitating Weishi-style video recording: hold to record, release to pause, with a segmented progress bar (based on javacv)

Tags: Weishi, android, javacv, short video

The demo APK for the video recorder is available for download; the project source is at https://github.com/qdrzwd/VideoRecorder

 

By now most people will have heard of Vine and Instagram. The former started with short-video sharing and was acquired by Twitter; the latter started with photo sharing, was acquired by Facebook, and quickly added video sharing of its own. Short-video sharing is clearly where social networks are headed as they adapt to mobile, and short-video apps have been springing up like mushrooms (counting "Miaopai", which Sina has been pushing recently, the well-known Chinese ones already include Weishi, Miaopai, Weikepai, Wanpai, and so on).

Almost everything you can find online about recording video is based on Android's built-in MediaRecorder class, but that class is too limited to be of much use in a real project. In the project below, video is recorded through javacv's own API, and audio is captured with Android's AudioRecord.

Anyone who has used Weishi or Vine knows how polished their recording experience is. Weishi imitates Vine, but in my view its recording is no worse. I had long wanted to build a similar recording app. After decompiling both, I found that Weishi ships its own ffmpeg build, while Vine uses the javacv library. Some searching showed that javacv already provides video recording, along with video editing and other image-processing features. I had never heard of javacv before, and after several days of digging I stumbled on this project on GitHub: https://github.com/sourab-sharma/TouchToRecord — GitHub really is a great place to find code. The recording feature described below is largely based on that project; my thanks again to its author.

Problems a video recorder has to solve:
1. How to get frame data from the camera
2. How to write that data into a video file
3. How to record audio and mux it with the video
4. How to pause and resume in the middle of a recording
5. The camera may not support the resolution you need, so frames have to be rescaled
6. Android cameras deliver video rotated by 90 degrees; how to rotate it back (and what changes when you switch to the front camera)

First, a few screenshots of the final result. Judging by these alone it already looks a lot like Weishi, doesn't it: segmented (press/release) recording, a segmented progress bar, muxed audio and video, and a final output of 480*480. Plenty of work remains, though: video rotation, composing video from still images, editing local video, post-recording effects, and so on.

On to the code. You need the javacv and javacpp jars plus the .so libraries javacv depends on; all of them can be found in the javacv open-source project: https://code.google.com/p/javacv/

First comes the FFmpegFrameRecorder class. It ships with javacv and already implements video and audio recording; only a few modifications were needed. The code carries fairly detailed comments, so I won't walk through it again here.

/*
 * Copyright (C) 2009,2010,2011,2012,2013 Samuel Audet
 *
 * This file is part of JavaCV.
 *
 * JavaCV is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 2 of the License, or
 * (at your option) any later version (subject to the "Classpath" exception
 * as provided in the LICENSE.txt file that accompanied this code).
 *
 * JavaCV is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with JavaCV.  If not, see <http://www.gnu.org/licenses/>.
 *
 *
 * Based on the output-example.c file included in FFmpeg 0.6.5
 * as well as on the decoding_encoding.c file included in FFmpeg 0.11.1,
 * which are covered by the following copyright notice:
 *
 * Libavformat API example: Output a media file in any supported
 * libavformat format. The default codecs are used.
 *
 * Copyright (c) 2001,2003 Fabrice Bellard
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
 * THE SOFTWARE.
*/ package com.qd.recorder; import com.googlecode.javacpp.BytePointer; import com.googlecode.javacpp.DoublePointer; import com.googlecode.javacpp.FloatPointer; import com.googlecode.javacpp.IntPointer; import com.googlecode.javacpp.Loader; import com.googlecode.javacpp.Pointer; import com.googlecode.javacpp.PointerPointer; import com.googlecode.javacpp.ShortPointer; import com.googlecode.javacv.FrameRecorder; import java.io.File; import java.nio.Buffer; import java.nio.ByteBuffer; import java.nio.ByteOrder; import java.nio.DoubleBuffer; import java.nio.FloatBuffer; import java.nio.IntBuffer; import java.nio.ShortBuffer; import java.util.Map.Entry; import static com.googlecode.javacv.cpp.avcodec.*; import static com.googlecode.javacv.cpp.avformat.*; import static com.googlecode.javacv.cpp.avutil.*; import static com.googlecode.javacv.cpp.opencv_core.*; import static com.googlecode.javacv.cpp.swresample.*; import static com.googlecode.javacv.cpp.swscale.*; /** * * @author Samuel Audet */ public class FFmpegFrameRecorder extends FrameRecorder { public static FFmpegFrameRecorder createDefault(File f, int w, int h) throws Exception { return new FFmpegFrameRecorder(f, w, h); } public static FFmpegFrameRecorder createDefault(String f, int w, int h) throws Exception { return new FFmpegFrameRecorder(f, w, h); } private static Exception loadingException = null; public static void tryLoad() throws Exception { if (loadingException != null) { throw loadingException; } else { try { Loader.load(com.googlecode.javacv.cpp.avutil.class); Loader.load(com.googlecode.javacv.cpp.avcodec.class); Loader.load(com.googlecode.javacv.cpp.avformat.class); Loader.load(com.googlecode.javacv.cpp.swscale.class); } catch (Throwable t) { if (t instanceof Exception) { throw loadingException = (Exception)t; } else { throw loadingException = new Exception("Failed to load " + FFmpegFrameRecorder.class, t); } } } } static { /* initialize libavcodec, and register all codecs and formats */ av_register_all(); avformat_network_init(); } public FFmpegFrameRecorder(File file, int audioChannels) { this(file, 0, 0, audioChannels); } public FFmpegFrameRecorder(String filename, int audioChannels) { this(filename, 0, 0, audioChannels); } public FFmpegFrameRecorder(File file, int imageWidth, int imageHeight) { this(file, imageWidth, imageHeight, 0); } public FFmpegFrameRecorder(String filename, int imageWidth, int imageHeight) { this(filename, imageWidth, imageHeight, 0); } public FFmpegFrameRecorder(File file, int imageWidth, int imageHeight, int audioChannels) { this(file.getAbsolutePath(), imageWidth, imageHeight); } public FFmpegFrameRecorder(String filename, int imageWidth, int imageHeight, int audioChannels) { this.filename = filename; this.imageWidth = imageWidth; this.imageHeight = imageHeight; this.audioChannels = audioChannels; this.pixelFormat = AV_PIX_FMT_NONE; this.videoCodec = AV_CODEC_ID_NONE; this.videoBitrate = 400000; this.frameRate = 30; this.sampleFormat = AV_SAMPLE_FMT_NONE; this.audioCodec = AV_CODEC_ID_NONE; this.audioBitrate = 64000; this.sampleRate = 44100; this.interleaved = true; this.video_pkt = new AVPacket(); this.audio_pkt = new AVPacket(); } public void release() throws Exception { synchronized (com.googlecode.javacv.cpp.avcodec.class) { releaseUnsafe(); } } public void releaseUnsafe() throws Exception { /* close each codec */ if (video_c != null) { avcodec_close(video_c); video_c = null; } if (audio_c != null) { avcodec_close(audio_c); audio_c = null; } if (picture_buf != null) { av_free(picture_buf); 
picture_buf = null; } if (picture != null) { avcodec_free_frame(picture); picture = null; } if (tmp_picture != null) { avcodec_free_frame(tmp_picture); tmp_picture = null; } if (video_outbuf != null) { av_free(video_outbuf); video_outbuf = null; } if (frame != null) { avcodec_free_frame(frame); frame = null; } if (samples_out != null) { for (int i = 0; i < samples_out.length; i++) { av_free(samples_out[i].position(0)); } samples_out = null; } if (audio_outbuf != null) { av_free(audio_outbuf); audio_outbuf = null; } video_st = null; audio_st = null; if (oc != null && !oc.isNull()) { if ((oformat.flags() & AVFMT_NOFILE) == 0) { /* close the output file */ avio_close(oc.pb()); } /* free the streams */ int nb_streams = oc.nb_streams(); for(int i = 0; i < nb_streams; i++) { av_free(oc.streams(i).codec()); av_free(oc.streams(i)); } /* free the stream */ av_free(oc); oc = null; } if (img_convert_ctx != null) { sws_freeContext(img_convert_ctx); img_convert_ctx = null; } if (samples_convert_ctx != null) { swr_free(samples_convert_ctx); samples_convert_ctx = null; } } @Override protected void finalize() throws Throwable { super.finalize(); release(); } private String filename; private AVFrame picture, tmp_picture; private BytePointer picture_buf; private BytePointer video_outbuf; private int video_outbuf_size; private AVFrame frame; private Pointer[] samples_in; private BytePointer[] samples_out; private PointerPointer samples_in_ptr; private PointerPointer samples_out_ptr; private BytePointer audio_outbuf; private int audio_outbuf_size; private int audio_input_frame_size; private AVOutputFormat oformat; private AVFormatContext oc; private AVCodec video_codec, audio_codec; private AVCodecContext video_c, audio_c; private AVStream video_st, audio_st; private SwsContext img_convert_ctx; private SwrContext samples_convert_ctx; private AVPacket video_pkt, audio_pkt; private int[] got_video_packet, got_audio_packet; @Override public int getFrameNumber() { return picture == null ? super.getFrameNumber() : (int)picture.pts(); } @Override public void setFrameNumber(int frameNumber) { if (picture == null) { super.setFrameNumber(frameNumber); } else { picture.pts(frameNumber); } } // best guess for timestamp in microseconds... @Override public long getTimestamp() { return Math.round(getFrameNumber() * 1000000L / getFrameRate()); } @Override public void setTimestamp(long timestamp) { setFrameNumber((int)Math.round(timestamp * getFrameRate() / 1000000L)); } public void start() throws Exception { synchronized (com.googlecode.javacv.cpp.avcodec.class) { startUnsafe(); } } public void startUnsafe() throws Exception { int ret; picture = null; tmp_picture = null; picture_buf = null; frame = null; video_outbuf = null; audio_outbuf = null; oc = null; video_c = null; audio_c = null; video_st = null; audio_st = null; got_video_packet = new int[1]; got_audio_packet = new int[1]; /* auto detect the output format from the name. */ String format_name = format == null || format.length() == 0 ? 
null : format; if ((oformat = av_guess_format(format_name, filename, null)) == null) { int proto = filename.indexOf("://"); if (proto > 0) { format_name = filename.substring(0, proto); } if ((oformat = av_guess_format(format_name, filename, null)) == null) { throw new Exception("av_guess_format() error: Could not guess output format for \"" + filename + "\" and " + format + " format."); } } format_name = oformat.name().getString(); /* allocate the output media context */ if ((oc = avformat_alloc_context()) == null) { throw new Exception("avformat_alloc_context() error: Could not allocate format context"); } oc.oformat(oformat); oc.filename().putString(filename); /* add the audio and video streams using the format codecs and initialize the codecs */ if (imageWidth > 0 && imageHeight > 0) { if (videoCodec != AV_CODEC_ID_NONE) { oformat.video_codec(videoCodec); } else if ("flv".equals(format_name)) { oformat.video_codec(AV_CODEC_ID_FLV1); } else if ("mp4".equals(format_name)) { oformat.video_codec(AV_CODEC_ID_MPEG4); } else if ("3gp".equals(format_name)) { oformat.video_codec(AV_CODEC_ID_H263); } else if ("avi".equals(format_name)) { oformat.video_codec(AV_CODEC_ID_HUFFYUV); } /* find the video encoder */ if ((video_codec = avcodec_find_encoder_by_name(videoCodecName)) == null && (video_codec = avcodec_find_encoder(oformat.video_codec())) == null) { release(); throw new Exception("avcodec_find_encoder() error: Video codec not found."); } AVRational frame_rate = av_d2q(frameRate, 1001000); AVRational supported_framerates = video_codec.supported_framerates(); if (supported_framerates != null) { int idx = av_find_nearest_q_idx(frame_rate, supported_framerates); frame_rate = supported_framerates.position(idx); } /* add a video output stream */ if ((video_st = avformat_new_stream(oc, video_codec)) == null) { release(); throw new Exception("avformat_new_stream() error: Could not allocate video stream."); } video_c = video_st.codec(); video_c.codec_id(oformat.video_codec()); video_c.codec_type(AVMEDIA_TYPE_VIDEO); /* put sample parameters */ video_c.bit_rate(videoBitrate); /* resolution must be a multiple of two, but round up to 16 as often required */ video_c.width((imageWidth + 15) / 16 * 16); video_c.height(imageHeight); /* time base: this is the fundamental unit of time (in seconds) in terms of which frame timestamps are represented. for fixed-fps content, timebase should be 1/framerate and timestamp increments should be identically 1. */ video_c.time_base(av_inv_q(frame_rate)); video_c.gop_size(12); /* emit one intra frame every twelve frames at most */ if (videoQuality >= 0) { video_c.flags(video_c.flags() | CODEC_FLAG_QSCALE); video_c.global_quality((int)Math.round(FF_QP2LAMBDA * videoQuality)); } if (pixelFormat != AV_PIX_FMT_NONE) { video_c.pix_fmt(pixelFormat); } else if (video_c.codec_id() == AV_CODEC_ID_RAWVIDEO || video_c.codec_id() == AV_CODEC_ID_PNG || video_c.codec_id() == AV_CODEC_ID_HUFFYUV || video_c.codec_id() == AV_CODEC_ID_FFV1) { video_c.pix_fmt(AV_PIX_FMT_RGB32); // appropriate for common lossless formats } else { video_c.pix_fmt(AV_PIX_FMT_YUV420P); // lossy, but works with about everything } if (video_c.codec_id() == AV_CODEC_ID_MPEG2VIDEO) { /* just for testing, we also add B frames */ video_c.max_b_frames(2); } else if (video_c.codec_id() == AV_CODEC_ID_MPEG1VIDEO) { /* Needed to avoid using macroblocks in which some coeffs overflow. This does not happen with normal video, it just happens here as the motion of the chroma plane does not match the luma plane. 
*/ video_c.mb_decision(2); } else if (video_c.codec_id() == AV_CODEC_ID_H263) { // H.263 does not support any other resolution than the following if (imageWidth <= 128 && imageHeight <= 96) { video_c.width(128).height(96); } else if (imageWidth <= 176 && imageHeight <= 144) { video_c.width(176).height(144); } else if (imageWidth <= 352 && imageHeight <= 288) { video_c.width(352).height(288); } else if (imageWidth <= 704 && imageHeight <= 576) { video_c.width(704).height(576); } else { video_c.width(1408).height(1152); } } else if (video_c.codec_id() == AV_CODEC_ID_H264) { // default to constrained baseline to produce content that plays back on anything, // without any significant tradeoffs for most use cases video_c.profile(AVCodecContext.FF_PROFILE_H264_CONSTRAINED_BASELINE); } // some formats want stream headers to be separate if ((oformat.flags() & AVFMT_GLOBALHEADER) != 0) { video_c.flags(video_c.flags() | CODEC_FLAG_GLOBAL_HEADER); } if ((video_codec.capabilities() & CODEC_CAP_EXPERIMENTAL) != 0) { video_c.strict_std_compliance(AVCodecContext.FF_COMPLIANCE_EXPERIMENTAL); } } /* * add an audio output stream */ if (audioChannels > 0 && audioBitrate > 0 && sampleRate > 0) { if (audioCodec != AV_CODEC_ID_NONE) { oformat.audio_codec(audioCodec); } else if ("flv".equals(format_name) || "mp4".equals(format_name) || "3gp".equals(format_name)) { oformat.audio_codec(AV_CODEC_ID_AAC); } else if ("avi".equals(format_name)) { oformat.audio_codec(AV_CODEC_ID_PCM_S16LE); } /* find the audio encoder */ if ((audio_codec = avcodec_find_encoder_by_name(audioCodecName)) == null && (audio_codec = avcodec_find_encoder(oformat.audio_codec())) == null) { release(); throw new Exception("avcodec_find_encoder() error: Audio codec not found."); } if ((audio_st = avformat_new_stream(oc, audio_codec)) == null) { release(); throw new Exception("avformat_new_stream() error: Could not allocate audio stream."); } audio_c = audio_st.codec(); audio_c.codec_id(oformat.audio_codec()); audio_c.codec_type(AVMEDIA_TYPE_AUDIO); /* put sample parameters */ audio_c.bit_rate(audioBitrate); audio_c.sample_rate(sampleRate); audio_c.channels(audioChannels); audio_c.channel_layout(av_get_default_channel_layout(audioChannels)); if (sampleFormat != AV_SAMPLE_FMT_NONE) { audio_c.sample_fmt(sampleFormat); } else if (audio_c.codec_id() == AV_CODEC_ID_AAC && (audio_codec.capabilities() & CODEC_CAP_EXPERIMENTAL) != 0) { audio_c.sample_fmt(AV_SAMPLE_FMT_FLTP); } else { audio_c.sample_fmt(AV_SAMPLE_FMT_S16); } audio_c.time_base().num(1).den(sampleRate); switch (audio_c.sample_fmt()) { case AV_SAMPLE_FMT_U8: case AV_SAMPLE_FMT_U8P: audio_c.bits_per_raw_sample(8); break; case AV_SAMPLE_FMT_S16: case AV_SAMPLE_FMT_S16P: audio_c.bits_per_raw_sample(16); break; case AV_SAMPLE_FMT_S32: case AV_SAMPLE_FMT_S32P: audio_c.bits_per_raw_sample(32); break; case AV_SAMPLE_FMT_FLT: case AV_SAMPLE_FMT_FLTP: audio_c.bits_per_raw_sample(32); break; case AV_SAMPLE_FMT_DBL: case AV_SAMPLE_FMT_DBLP: audio_c.bits_per_raw_sample(64); break; default: assert false; } if (audioQuality >= 0) { audio_c.flags(audio_c.flags() | CODEC_FLAG_QSCALE); audio_c.global_quality((int)Math.round(FF_QP2LAMBDA * audioQuality)); } // some formats want stream headers to be separate if ((oformat.flags() & AVFMT_GLOBALHEADER) != 0) { audio_c.flags(audio_c.flags() | CODEC_FLAG_GLOBAL_HEADER); } if ((audio_codec.capabilities() & CODEC_CAP_EXPERIMENTAL) != 0) { audio_c.strict_std_compliance(AVCodecContext.FF_COMPLIANCE_EXPERIMENTAL); } } av_dump_format(oc, 0, filename, 1); /* now that all 
the parameters are set, we can open the audio and video codecs and allocate the necessary encode buffers */ if (video_st != null) { AVDictionary options = new AVDictionary(null); if (videoQuality >= 0) { av_dict_set(options, "crf", "" + videoQuality, 0); } for (Entry<String, String> e : videoOptions.entrySet()) { av_dict_set(options, e.getKey(), e.getValue(), 0); } /* open the codec */ if ((ret = avcodec_open2(video_c, video_codec, options)) < 0) { release(); throw new Exception("avcodec_open2() error " + ret + ": Could not open video codec."); } av_dict_free(options); video_outbuf = null; if ((oformat.flags() & AVFMT_RAWPICTURE) == 0) { /* allocate output buffer */ /* XXX: API change will be done */ /* buffers passed into lav* can be allocated any way you prefer, as long as they're aligned enough for the architecture, and they're freed appropriately (such as using av_free for buffers allocated with av_malloc) */ video_outbuf_size = Math.max(256 * 1024, 8 * video_c.width() * video_c.height()); // a la ffmpeg.c video_outbuf = new BytePointer(av_malloc(video_outbuf_size)); } /* allocate the encoded raw picture */ if ((picture = avcodec_alloc_frame()) == null) { release(); throw new Exception("avcodec_alloc_frame() error: Could not allocate picture."); } picture.pts(0); // magic required by libx264 int size = avpicture_get_size(video_c.pix_fmt(), video_c.width(), video_c.height()); if ((picture_buf = new BytePointer(av_malloc(size))).isNull()) { release(); throw new Exception("av_malloc() error: Could not allocate picture buffer."); } /* if the output format is not equal to the image format, then a temporary picture is needed too. It is then converted to the required output format */ if ((tmp_picture = avcodec_alloc_frame()) == null) { release(); throw new Exception("avcodec_alloc_frame() error: Could not allocate temporary picture."); } } if (audio_st != null) { AVDictionary options = new AVDictionary(null); if (audioQuality >= 0) { av_dict_set(options, "crf", "" + audioQuality, 0); } for (Entry<String, String> e : audioOptions.entrySet()) { av_dict_set(options, e.getKey(), e.getValue(), 0); } /* open the codec */ if ((ret = avcodec_open2(audio_c, audio_codec, options)) < 0) { release(); throw new Exception("avcodec_open2() error " + ret + ": Could not open audio codec."); } av_dict_free(options); audio_outbuf_size = 256 * 1024; audio_outbuf = new BytePointer(av_malloc(audio_outbuf_size)); /* ugly hack for PCM codecs (will be removed ASAP with new PCM support to compute the input frame size in samples */ if (audio_c.frame_size() <= 1) { audio_outbuf_size = FF_MIN_BUFFER_SIZE; audio_input_frame_size = audio_outbuf_size / audio_c.channels(); switch (audio_c.codec_id()) { case AV_CODEC_ID_PCM_S16LE: case AV_CODEC_ID_PCM_S16BE: case AV_CODEC_ID_PCM_U16LE: case AV_CODEC_ID_PCM_U16BE: audio_input_frame_size >>= 1; break; default: break; } } else { audio_input_frame_size = audio_c.frame_size(); } //int bufferSize = audio_input_frame_size * audio_c.bits_per_raw_sample()/8 * audio_c.channels(); int planes = av_sample_fmt_is_planar(audio_c.sample_fmt()) != 0 ? 
(int)audio_c.channels() : 1; int data_size = av_samples_get_buffer_size((IntPointer)null, audio_c.channels(), audio_input_frame_size, audio_c.sample_fmt(), 1) / planes; samples_out = new BytePointer[planes]; for (int i = 0; i < samples_out.length; i++) { samples_out[i] = new BytePointer(av_malloc(data_size)).capacity(data_size); } samples_in = new Pointer[AVFrame.AV_NUM_DATA_POINTERS]; samples_in_ptr = new PointerPointer(AVFrame.AV_NUM_DATA_POINTERS); samples_out_ptr = new PointerPointer(AVFrame.AV_NUM_DATA_POINTERS); /* allocate the audio frame */ if ((frame = avcodec_alloc_frame()) == null) { release(); throw new Exception("avcodec_alloc_frame() error: Could not allocate audio frame."); } } /* open the output file, if needed */ if ((oformat.flags() & AVFMT_NOFILE) == 0) { AVIOContext pb = new AVIOContext(null); if ((ret = avio_open(pb, filename, AVIO_FLAG_WRITE)) < 0) { release(); throw new Exception("avio_open error() error " + ret + ": Could not open '" + filename + "'"); } oc.pb(pb); } /* write the stream header, if any */ avformat_write_header(oc, (PointerPointer)null); } public void stop() throws Exception { if (oc != null) { try { /* flush all the buffers */ while (video_st != null && record((IplImage)null, AV_PIX_FMT_NONE)); while (audio_st != null && record((AVFrame)null)); if (interleaved && video_st != null && audio_st != null) { av_interleaved_write_frame(oc, null); } else { av_write_frame(oc, null); } /* write the trailer, if any */ av_write_trailer(oc); } finally { release(); } } } public boolean record(IplImage image) throws Exception { return record(image, AV_PIX_FMT_NONE); } public boolean record(IplImage image, int pixelFormat) throws Exception { if (video_st == null) { throw new Exception("No video output stream (Is imageWidth > 0 && imageHeight > 0 and has start() been called?)"); } int ret; if (image == null) { /* no more frame to compress. The codec has a latency of a few frames if using B frames, so we get the last frames by passing the same picture again */ } else { int width = image.width(); int height = image.height(); int step = image.widthStep(); BytePointer data = image.imageData(); if (pixelFormat == AV_PIX_FMT_NONE) { int depth = image.depth(); int channels = image.nChannels(); if ((depth == IPL_DEPTH_8U || depth == IPL_DEPTH_8S) && channels == 3) { pixelFormat = AV_PIX_FMT_BGR24; } else if ((depth == IPL_DEPTH_8U || depth == IPL_DEPTH_8S) && channels == 1) { pixelFormat = AV_PIX_FMT_GRAY8; } else if ((depth == IPL_DEPTH_16U || depth == IPL_DEPTH_16S) && channels == 1) { pixelFormat = ByteOrder.nativeOrder().equals(ByteOrder.BIG_ENDIAN) ? 
AV_PIX_FMT_GRAY16BE : AV_PIX_FMT_GRAY16LE; } else if ((depth == IPL_DEPTH_8U || depth == IPL_DEPTH_8S) && channels == 4) { pixelFormat = AV_PIX_FMT_RGBA; } else if ((depth == IPL_DEPTH_8U || depth == IPL_DEPTH_8S) && channels == 2) { pixelFormat = AV_PIX_FMT_NV21; // Android's camera capture format step = width; } else { throw new Exception("Could not guess pixel format of image: depth=" + depth + ", channels=" + channels); } } if (video_c.pix_fmt() != pixelFormat || video_c.width() != width || video_c.height() != height) { /* convert to the codec pixel format if needed */ img_convert_ctx = sws_getCachedContext(img_convert_ctx, width, height, pixelFormat, video_c.width(), video_c.height(), video_c.pix_fmt(), SWS_BILINEAR, null, null, (DoublePointer)null); if (img_convert_ctx == null) { throw new Exception("sws_getCachedContext() error: Cannot initialize the conversion context."); } avpicture_fill(new AVPicture(tmp_picture), data, pixelFormat, width, height); avpicture_fill(new AVPicture(picture), picture_buf, video_c.pix_fmt(), video_c.width(), video_c.height()); tmp_picture.linesize(0, step); sws_scale(img_convert_ctx, new PointerPointer(tmp_picture), tmp_picture.linesize(), 0, height, new PointerPointer(picture), picture.linesize()); } else { avpicture_fill(new AVPicture(picture), data, pixelFormat, width, height); picture.linesize(0, step); } } if ((oformat.flags() & AVFMT_RAWPICTURE) != 0) { if (image == null) { return false; } /* raw video case. The API may change slightly in the future for that? */ av_init_packet(video_pkt); video_pkt.flags(video_pkt.flags() | AV_PKT_FLAG_KEY); video_pkt.stream_index(video_st.index()); video_pkt.data(new BytePointer(picture)); video_pkt.size(Loader.sizeof(AVPicture.class)); } else { /* encode the image */ av_init_packet(video_pkt); video_pkt.data(video_outbuf); video_pkt.size(video_outbuf_size); picture.quality(video_c.global_quality()); if ((ret = avcodec_encode_video2(video_c, video_pkt, image == null ? null : picture, got_video_packet)) < 0) { throw new Exception("avcodec_encode_video2() error " + ret + ": Could not encode video packet."); } picture.pts(picture.pts() + 1); // magic required by libx264 /* if zero size, it means the image was buffered */ if (got_video_packet[0] != 0) { if (video_pkt.pts() != AV_NOPTS_VALUE) { video_pkt.pts(av_rescale_q(video_pkt.pts(), video_c.time_base(), video_st.time_base())); } if (video_pkt.dts() != AV_NOPTS_VALUE) { video_pkt.dts(av_rescale_q(video_pkt.dts(), video_c.time_base(), video_st.time_base())); } video_pkt.stream_index(video_st.index()); } else { return false; } } synchronized (oc) { /* write the compressed frame in the media file */ if (interleaved && audio_st != null) { if ((ret = av_interleaved_write_frame(oc, video_pkt)) < 0) { throw new Exception("av_interleaved_write_frame() error " + ret + " while writing interleaved video frame."); } } else { if ((ret = av_write_frame(oc, video_pkt)) < 0) { throw new Exception("av_write_frame() error " + ret + " while writing video frame."); } } } return picture.key_frame() != 0; } @Override public boolean record(int sampleRate, Buffer ... samples) throws Exception { if (audio_st == null) { throw new Exception("No audio output stream (Is audioChannels > 0 and has start() been called?)"); } int ret; int inputSize = samples[0].limit() - samples[0].position(); int inputFormat = AV_SAMPLE_FMT_NONE; int inputChannels = samples.length > 1 ? 1 : audioChannels; int inputDepth = 0; int outputFormat = audio_c.sample_fmt(); int outputChannels = samples_out.length > 1 ? 
1 : audioChannels; int outputDepth = av_get_bytes_per_sample(outputFormat); if (sampleRate <= 0) { sampleRate = audio_c.sample_rate(); } if (samples[0] instanceof ByteBuffer) { inputFormat = samples.length > 1 ? AV_SAMPLE_FMT_U8P : AV_SAMPLE_FMT_U8; inputDepth = 1; for (int i = 0; i < samples.length; i++) { ByteBuffer b = (ByteBuffer)samples[i]; if (samples_in[i] instanceof BytePointer && samples_in[i].capacity() >= inputSize && b.hasArray()) { ((BytePointer)samples_in[i]).position(0).put(b.array(), b.position(), inputSize); } else { samples_in[i] = new BytePointer(b); } } } else if (samples[0] instanceof ShortBuffer) { inputFormat = samples.length > 1 ? AV_SAMPLE_FMT_S16P : AV_SAMPLE_FMT_S16; inputDepth = 2; for (int i = 0; i < samples.length; i++) { ShortBuffer b = (ShortBuffer)samples[i]; if (samples_in[i] instanceof ShortPointer && samples_in[i].capacity() >= inputSize && b.hasArray()) { ((ShortPointer)samples_in[i]).position(0).put(b.array(), samples[i].position(), inputSize); } else { samples_in[i] = new ShortPointer(b); } } } else if (samples[0] instanceof IntBuffer) { inputFormat = samples.length > 1 ? AV_SAMPLE_FMT_S32P : AV_SAMPLE_FMT_S32; inputDepth = 4; for (int i = 0; i < samples.length; i++) { IntBuffer b = (IntBuffer)samples[i]; if (samples_in[i] instanceof IntPointer && samples_in[i].capacity() >= inputSize && b.hasArray()) { ((IntPointer)samples_in[i]).position(0).put(b.array(), samples[i].position(), inputSize); } else { samples_in[i] = new IntPointer(b); } } } else if (samples[0] instanceof FloatBuffer) { inputFormat = samples.length > 1 ? AV_SAMPLE_FMT_FLTP : AV_SAMPLE_FMT_FLT; inputDepth = 4; for (int i = 0; i < samples.length; i++) { FloatBuffer b = (FloatBuffer)samples[i]; if (samples_in[i] instanceof FloatPointer && samples_in[i].capacity() >= inputSize && b.hasArray()) { ((FloatPointer)samples_in[i]).position(0).put(b.array(), b.position(), inputSize); } else { samples_in[i] = new FloatPointer(b); } } } else if (samples[0] instanceof DoubleBuffer) { inputFormat = samples.length > 1 ? AV_SAMPLE_FMT_DBLP : AV_SAMPLE_FMT_DBL; inputDepth = 8; for (int i = 0; i < samples.length; i++) { DoubleBuffer b = (DoubleBuffer)samples[i]; if (samples_in[i] instanceof DoublePointer && samples_in[i].capacity() >= inputSize && b.hasArray()) { ((DoublePointer)samples_in[i]).position(0).put(b.array(), b.position(), inputSize); } else { samples_in[i] = new DoublePointer(b); } } } else { throw new Exception("Audio samples Buffer has unsupported type: " + samples); } if (samples_convert_ctx == null) { samples_convert_ctx = swr_alloc_set_opts(null, audio_c.channel_layout(), outputFormat, audio_c.sample_rate(), audio_c.channel_layout(), inputFormat, sampleRate, 0, null); if (samples_convert_ctx == null) { throw new Exception("swr_alloc_set_opts() error: Cannot allocate the conversion context."); } else if ((ret = swr_init(samples_convert_ctx)) < 0) { throw new Exception("swr_init() error " + ret + ": Cannot initialize the conversion context."); } } for (int i = 0; i < samples.length; i++) { samples_in[i].position(samples_in[i].position() * inputDepth). 
limit((samples_in[i].position() + inputSize) * inputDepth); } while (true) { int inputCount = (samples_in[0].limit() - samples_in[0].position()) / (inputChannels * inputDepth); int outputCount = (samples_out[0].limit() - samples_out[0].position()) / (outputChannels * outputDepth); inputCount = Math.min(inputCount, 2 * (outputCount * sampleRate) / audio_c.sample_rate()); for (int i = 0; i < samples.length; i++) { samples_in_ptr.put(i, samples_in[i]); } for (int i = 0; i < samples_out.length; i++) { samples_out_ptr.put(i, samples_out[i]); } if ((ret = swr_convert(samples_convert_ctx, samples_out_ptr, outputCount, samples_in_ptr, inputCount)) < 0) { throw new Exception("swr_convert() error " + ret + ": Cannot convert audio samples."); } else if (ret == 0) { break; } for (int i = 0; i < samples.length; i++) { samples_in[i].position(samples_in[i].position() + inputCount * inputChannels * inputDepth); } for (int i = 0; i < samples_out.length; i++) { samples_out[i].position(samples_out[i].position() + ret * outputChannels * outputDepth); } if (samples_out[0].position() >= samples_out[0].limit()) { frame.nb_samples(audio_input_frame_size); avcodec_fill_audio_frame(frame, audio_c.channels(), outputFormat, samples_out[0], samples_out[0].limit(), 0); for (int i = 0; i < samples_out.length; i++) { frame.data(i, samples_out[i].position(0)); frame.linesize(i, samples_out[i].limit()); } frame.quality(audio_c.global_quality()); record(frame); } } return frame.key_frame() != 0; } boolean record(AVFrame frame) throws Exception { int ret; av_init_packet(audio_pkt); audio_pkt.data(audio_outbuf); audio_pkt.size(audio_outbuf_size); if ((ret = avcodec_encode_audio2(audio_c, audio_pkt, frame, got_audio_packet)) < 0) { throw new Exception("avcodec_encode_audio2() error " + ret + ": Could not encode audio packet."); } if (got_audio_packet[0] != 0) { if (audio_pkt.pts() != AV_NOPTS_VALUE) { audio_pkt.pts(av_rescale_q(audio_pkt.pts(), audio_c.time_base(), audio_c.time_base())); } if (audio_pkt.dts() != AV_NOPTS_VALUE) { audio_pkt.dts(av_rescale_q(audio_pkt.dts(), audio_c.time_base(), audio_c.time_base())); } audio_pkt.flags(audio_pkt.flags() | AV_PKT_FLAG_KEY); audio_pkt.stream_index(audio_st.index()); } else { return false; } /* write the compressed frame in the media file */ synchronized (oc) { if (interleaved && video_st != null) { if ((ret = av_interleaved_write_frame(oc, audio_pkt)) < 0) { throw new Exception("av_interleaved_write_frame() error " + ret + " while writing interleaved audio frame."); } } else { if ((ret = av_write_frame(oc, audio_pkt)) < 0) { throw new Exception("av_write_frame() error " + ret + " while writing audio frame."); } } } return true; } }
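Before the Android pieces, here is a minimal sketch of how this FFmpegFrameRecorder is meant to be driven. The output path, sizes and buffer contents are placeholder values; the real wiring (camera preview callback, AudioRecord thread) appears in FFmpegRecorderActivity below.

import java.nio.Buffer;
import java.nio.ShortBuffer;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import static com.googlecode.javacv.cpp.opencv_core.IPL_DEPTH_8U;

public class RecorderUsageSketch {
    public static void record() throws Exception {
        FFmpegFrameRecorder recorder = new FFmpegFrameRecorder("/sdcard/demo.mp4", 480, 480, 1);
        recorder.setFormat("mp4");
        recorder.setFrameRate(15);
        recorder.setSampleRate(44100);
        recorder.start();                               // opens the codecs and writes the header

        // Video: one NV21 preview frame copied into a 2-channel, 8-bit IplImage.
        IplImage yuv = IplImage.create(480, 480, IPL_DEPTH_8U, 2);
        // yuv.getByteBuffer().put(nv21Bytes);          // normally filled in onPreviewFrame()
        recorder.setTimestamp(0L);                      // microseconds; this drives A/V sync
        recorder.record(yuv);

        // Audio: a chunk of 16-bit PCM samples, as delivered by AudioRecord.
        short[] pcm = new short[4096];
        recorder.record(new Buffer[] { ShortBuffer.wrap(pcm, 0, pcm.length) });

        recorder.stop();                                // flushes the encoders, writes the trailer
        recorder.release();
    }
}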
"mp4" : "3gp"; public static boolean isAAC_SUPPORTED() { return AAC_SUPPORTED; } public static void setAAC_SUPPORTED(boolean aAC_SUPPORTED) { AAC_SUPPORTED = aAC_SUPPORTED; } public String getVideoOutputFormat() { return videoOutputFormat; } public void setVideoOutputFormat(String videoOutputFormat) { this.videoOutputFormat = videoOutputFormat; } public int getAudioSamplingRate() { return audioSamplingRate; } public void setAudioSamplingRate(int audioSamplingRate) { this.audioSamplingRate = audioSamplingRate; } public int getVideoCodec() { return videoCodec; } public void setVideoCodec(int videoCodec) { this.videoCodec = videoCodec; } public int getVideoFrameRate() { return videoFrameRate; } public void setVideoFrameRate(int videoFrameRate) { this.videoFrameRate = videoFrameRate; } public int getVideoQuality() { return videoQuality; } public void setVideoQuality(int videoQuality) { this.videoQuality = videoQuality; } public int getAudioCodec() { return audioCodec; } public void setAudioCodec(int audioCodec) { this.audioCodec = audioCodec; } public int getAudioChannel() { return audioChannel; } public void setAudioChannel(int audioChannel) { this.audioChannel = audioChannel; } public int getAudioBitrate() { return audioBitrate; } public void setAudioBitrate(int audioBitrate) { this.audioBitrate = audioBitrate; } public int getVideoBitrate() { return videoBitrate; } public void setVideoBitrate(int videoBitrate) { this.videoBitrate = videoBitrate; } } 视频录制时每一帧的数据 package com.qd.recorder; import android.os.Parcel; import android.os.Parcelable; public class SavedFrames implements Parcelable{ byte[] frameBytesData = null; long timeStamp = 0L; String cachePath = null; int frameSize = 0; public byte[] getFrameBytesData() { return frameBytesData; } public void setFrameBytesData(byte[] frameBytesData) { this.frameBytesData = frameBytesData; } public long getTimeStamp() { return timeStamp; } public void setTimeStamp(long timeStamp) { this.timeStamp = timeStamp; } public SavedFrames(byte[] frameBytesData, long timeStamp) { this.frameBytesData = frameBytesData; this.timeStamp = timeStamp; } public String getCachePath() { return cachePath; } public void setCachePath(String cachePath) { this.cachePath = cachePath; } public int getframeSize() { return frameSize; } public void setframeSize(int frameSize) { this.frameSize = frameSize; } public SavedFrames(Parcel in) { readFromParcel(in); } public SavedFrames() { frameSize = 0; frameBytesData = new byte[0]; timeStamp = 0L; cachePath = null; } public static final Creator<SavedFrames> CREATOR = new Creator<SavedFrames>() { @Override public SavedFrames createFromParcel(Parcel paramParcel) { SavedFrames savedFrame = new SavedFrames(); savedFrame.readFromParcel(paramParcel); return savedFrame; } @Override public SavedFrames[] newArray(int paramInt) { return new SavedFrames[paramInt]; } }; @Override public int describeContents() { return 0; } @Override public void writeToParcel(Parcel out, int arg1) { out.writeLong(timeStamp); out.writeInt(frameSize); out.writeByteArray(frameBytesData); out.writeString(cachePath); } private void readFromParcel(Parcel in) { timeStamp = in.readLong(); frameSize = in.readInt(); frameBytesData = new byte[frameSize]; in.readByteArray(frameBytesData); cachePath = in.readString(); } } 下面是录制视频的最重要的类了,有些功能还没有实现 package com.qd.recorder; import static com.googlecode.javacv.cpp.opencv_core.IPL_DEPTH_8U; import java.io.ByteArrayOutputStream; import java.io.File; import java.io.FileNotFoundException; import java.io.FileOutputStream; import 
package com.qd.recorder;

import static com.googlecode.javacv.cpp.opencv_core.IPL_DEPTH_8U;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.Buffer;
import java.nio.ShortBuffer;
import java.util.Collections;
import java.util.List;

import android.app.Activity;
import android.app.Dialog;
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Color;
import android.graphics.Matrix;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import android.hardware.Camera.CameraInfo;
import android.hardware.Camera.Parameters;
import android.hardware.Camera.PreviewCallback;
import android.hardware.Camera.Size;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.net.Uri;
import android.os.AsyncTask;
import android.os.Build;
import android.os.Bundle;
import android.os.Environment;
import android.os.Handler;
import android.os.Message;
import android.os.PowerManager;
import android.provider.MediaStore.Video;
import android.util.DisplayMetrics;
import android.view.Gravity;
import android.view.MotionEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import android.view.View.OnClickListener;
import android.view.View.OnTouchListener;
import android.view.ViewGroup.LayoutParams;
import android.view.Window;
import android.view.WindowManager;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.TextView;

import com.googlecode.javacv.FrameRecorder;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
import com.qd.recorder.ProgressView.State;
import com.qd.videorecorder.R;

public class FFmpegRecorderActivity extends Activity implements OnClickListener, OnTouchListener {

    private final static String CLASS_LABEL = "RecordActivity";
    private final static String LOG_TAG = CLASS_LABEL;

    private PowerManager.WakeLock mWakeLock;

    // Where the video file is saved
    private String strVideoPath = Environment.getExternalStorageDirectory().getAbsolutePath();
    // The video file object
    private File fileVideoPath = null;
    // The content URI the video gets in the system media store
    private Uri uriVideoPath = null;
    // Whether frames should currently be recorded; tapping "next" pauses recording
    private boolean rec = false;
    // Whether recording is active: finger down resumes, finger up pauses
    boolean recording = false;
    // Whether recording has started at all; set to true on the first press
    boolean isRecordingStarted = false;
    // Whether the flashlight is on
    boolean isFlashOn = false;

    TextView txtTimer, txtRecordingSize;
    // Flash, cancel, next and switch-camera buttons
    Button flashIcon = null, cancelBtn, nextBtn, switchCameraIcon = null;
    boolean nextEnabled = false;

    // The class that records video and audio
    private volatile FFmpegFrameRecorder videoRecorder;

    // Whether the camera preview is currently running
    private boolean isPreviewOn = false;

    // Current recording quality; affects clarity and file size
    private int currentResolution = CONSTANTS.RESOLUTION_MEDIUM_VALUE;

    private Camera mCamera;
    // Preview size and screen size
    private int previewWidth = 480, screenWidth = 480;
    private int previewHeight = 480, screenHeight = 800;

    // Audio sample rate; recorderParameters supplies a default
    private int sampleRate = 44100;
    // The system audio capture class
    private AudioRecord audioRecord;
    // The audio recording thread
    private AudioRecordRunnable audioRecordRunnable;
    private Thread audioThread;
    // Flag that starts and stops the audio thread
    volatile boolean runAudioThread = true;

    // The camera and its parameters
    private Camera cameraDevice;
    private CameraView cameraView;
    Parameters cameraParameters = null;

    // IplImage that wraps the camera's byte[] plus width/height/depth/channel info
    private IplImage yuvIplImage = null;

    // Default (back) camera id, default resolution index, selected camera (front or back)
    int defaultCameraId = -1, defaultScreenResolution = -1, cameraSelection = 0;

    Handler handler = new Handler();
    private Runnable mUpdateTimeTask = new Runnable() {
        public void run() {
            if (rec) setTotalVideoTime();
            handler.postDelayed(this, 200);
        }
    };

    private Dialog dialog = null;
    // Parent layout that contains the camera preview SurfaceView
    RelativeLayout topLayout = null;

    // Wall-clock time of the first press
    long firstTime = 0;
    // When the finger was lifted (a pause began)
    long startPauseTime = 0;
    // Length of the most recent pause (between finger-up and the next finger-down)
    long totalPauseTime = 0;
    // Accumulated pause time across all pauses
    long pausedTime = 0;
    // When the finger came back down (the pause ended)
    long stopPauseTime = 0;
    // Total effective recording time
    long totalTime = 0;

    // Video frame rate
    private int frameRate = 15;
    // Maximum recording time (ms)
    private int recordingTime = 8000;
    // Minimum recording time (ms)
    private int recordingMinimumTime = 6000;
    // Time after which the "change scene" hint is shown (ms)
    private int recordingChangeTime = 3000;
    boolean recordFinish = false;

    private Dialog creatingProgress;

    // Audio timestamp
    private volatile long mAudioTimestamp = 0L;
    // The next two arrays are used only as sync locks and carry no data
    private final int[] mVideoRecordLock = new int[0];
    private final int[] mAudioRecordLock = new int[0];
    private long mLastAudioTimestamp = 0L;
    private volatile long mAudioTimeRecorded;
    private long frameTime = 0L;
    // Holds the most recent preview frame
    private SavedFrames lastSavedframe = new SavedFrames(null, 0L);
    // Video timestamp
    private long mVideoTimestamp = 0L;
    // Whether the video file has already been saved
    private boolean isRecordingSaved = false;
    private boolean isFinalizing = false;
    // The segmented progress bar
    private ProgressView progressView;
    // Path of the captured first-frame (cover) image
    private String imagePath = null;

    private RecorderState currentRecorderState = RecorderState.PRESS;
    private ImageView stateImageView;

    private Handler mHandler;

    private void initHandler() {
        mHandler = new Handler() {
            @Override
            public void dispatchMessage(Message msg) {
                switch (msg.what) {
                /*case 1:
                    final byte[] data = (byte[]) msg.obj;
                    ThreadPoolUtils.execute(new Runnable() {
                        @Override
                        public void run() {
                            getFirstCapture(data);
                        }
                    });
                    break;*/
                case 2:
                    int resId = 0;
                    if (currentRecorderState == RecorderState.PRESS) {
                        resId = R.drawable.video_text01;
                    } else if (currentRecorderState == RecorderState.LOOSEN) {
                        resId = R.drawable.video_text02;
                    } else if (currentRecorderState == RecorderState.CHANGE) {
                        resId = R.drawable.video_text03;
                    } else if (currentRecorderState == RecorderState.SUCCESS) {
                        resId = R.drawable.video_text04;
                    }
                    stateImageView.setImageResource(resId);
                    break;
                case 3:
                    if (!recording) initiateRecording(true);
                    else {
                        // Update the pause bookkeeping; subtract one frame's worth of ms
                        stopPauseTime = System.currentTimeMillis();
                        totalPauseTime = stopPauseTime - startPauseTime - (long) (1000.0 / frameRate);
                        pausedTime += totalPauseTime;
                    }
                    rec = true;
                    // Start the progress bar growing
                    progressView.setCurrentState(State.START);
                    setTotalVideoTime();
                    break;
                case 4:
                    // Put the progress bar into its paused state
                    progressView.setCurrentState(State.PAUSE);
                    // Push this pause point onto the progress bar's breakpoint list
                    progressView.putProgressList((int) totalTime);
                    rec = false;
                    startPauseTime = System.currentTimeMillis();
                    if (totalTime >= recordingMinimumTime) {
                        currentRecorderState = RecorderState.SUCCESS;
                        mHandler.sendEmptyMessage(2);
                    } else if (totalTime >= recordingChangeTime) {
                        currentRecorderState = RecorderState.CHANGE;
                        mHandler.sendEmptyMessage(2);
                    }
                    break;
                default:
                    break;
                }
            }
        };
    }

    // The checkneon library provides a NEON-optimized build of opencv
    static {
        System.loadLibrary("checkneon");
    }

    public native static int checkNeonFromJNI();

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_recorder);
        PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
        mWakeLock = pm.newWakeLock(PowerManager.SCREEN_BRIGHT_WAKE_LOCK, CLASS_LABEL);
        mWakeLock.acquire();
        DisplayMetrics displaymetrics = new DisplayMetrics();
        getWindowManager().getDefaultDisplay().getMetrics(displaymetrics);
        screenWidth = displaymetrics.widthPixels;
        screenHeight = displaymetrics.heightPixels;
        initHandler();
        initLayout();
        initVideoRecorder();
        startRecording();
    }
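    // A worked example of the pause bookkeeping driven by messages 3 and 4 above,
    // with made-up millisecond values (not part of the original app flow):
    private static long examplePauseMath() {
        long firstTime = 0;                       // first ACTION_DOWN
        long startPauseTime = 2000;               // finger lifted after 2 s of footage
        long stopPauseTime = 5000;                // finger pressed again 3 s later
        long oneFrameMs = (long) (1000.0 / 15);   // ≈ 66 ms at 15 fps
        long pausedTime = stopPauseTime - startPauseTime - oneFrameMs;  // ≈ 2934 ms paused
        long now = 6000;                          // 1 s after resuming
        return now - firstTime - pausedTime - oneFrameMs;  // ≈ 3000 ms of actual footage
    }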
    @Override
    protected void onResume() {
        super.onResume();
        mHandler.sendEmptyMessage(2);
        if (mWakeLock == null) {
            // Acquire a wake lock to keep the screen on
            PowerManager pm = (PowerManager) getSystemService(Context.POWER_SERVICE);
            mWakeLock = pm.newWakeLock(PowerManager.SCREEN_BRIGHT_WAKE_LOCK, CLASS_LABEL);
            mWakeLock.acquire();
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (!isFinalizing) finish();
        if (mWakeLock != null) {
            mWakeLock.release();
            mWakeLock = null;
        }
    }

    @Override
    protected void onStop() {
        super.onStop();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        recording = false;
        if (cameraView != null) {
            cameraView.stopPreview();
            if (cameraDevice != null) cameraDevice.release();
            cameraDevice = null;
        }
        if (mWakeLock != null) {
            mWakeLock.release();
            mWakeLock = null;
        }
    }

    private void initLayout() {
        stateImageView = (ImageView) findViewById(R.id.recorder_surface_state);
        progressView = (ProgressView) findViewById(R.id.recorder_progress);
        cancelBtn = (Button) findViewById(R.id.recorder_cancel);
        cancelBtn.setOnClickListener(this);
        nextBtn = (Button) findViewById(R.id.recorder_next);
        nextBtn.setOnClickListener(this);
        txtTimer = (TextView) findViewById(R.id.txtTimer);
        flashIcon = (Button) findViewById(R.id.recorder_flashlight);
        switchCameraIcon = (Button) findViewById(R.id.recorder_frontcamera);
        if (getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA_FLASH)) {
            flashIcon.setOnClickListener(this);
            flashIcon.setVisibility(View.VISIBLE);
        }
        if (getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA_FRONT)) {
            switchCameraIcon.setOnClickListener(this);
            switchCameraIcon.setVisibility(View.VISIBLE);
        }
        initCameraLayout();
    }

    private void initCameraLayout() {
        topLayout = (RelativeLayout) findViewById(R.id.recorder_surface_parent);
        if (topLayout != null && topLayout.getChildCount() > 0) topLayout.removeAllViews();
        setCamera();
        handleSurfaceChanged();
        // Size the preview surface
        RelativeLayout.LayoutParams layoutParam1 = new RelativeLayout.LayoutParams(screenWidth,
                (int) (screenWidth * (previewWidth / (previewHeight * 1f))));
        layoutParam1.addRule(RelativeLayout.ALIGN_PARENT_TOP, RelativeLayout.TRUE);
        //int margin = Util.calculateMargin(previewWidth, screenWidth);
        //layoutParam1.setMargins(0, margin, 0, margin);
        RelativeLayout.LayoutParams layoutParam2 = new RelativeLayout.LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT);
        layoutParam2.topMargin = screenWidth;
        View view = new View(this);
        view.setFocusable(false);
        view.setBackgroundColor(Color.BLACK);
        view.setFocusableInTouchMode(false);
        topLayout.addView(cameraView, layoutParam1);
        topLayout.addView(view, layoutParam2);
        topLayout.setOnTouchListener(this);
    }

    private void setCamera() {
        try {
            if (Build.VERSION.SDK_INT > Build.VERSION_CODES.FROYO) {
                int numberOfCameras = Camera.getNumberOfCameras();
                CameraInfo cameraInfo = new CameraInfo();
                for (int i = 0; i < numberOfCameras; i++) {
                    Camera.getCameraInfo(i, cameraInfo);
                    if (cameraInfo.facing == cameraSelection) {
                        defaultCameraId = i;
                    }
                }
            }
            stopPreview();
            if (mCamera != null) mCamera.release();
            if (defaultCameraId >= 0) cameraDevice = Camera.open(defaultCameraId);
            else cameraDevice = Camera.open();
            if (cameraDevice == null) {
                //FuncCore.showToast(this, "Unable to connect to the camera right now");
                finish();
            }
            cameraView = new CameraView(this, cameraDevice);
        } catch (Exception e) {
            finish();
        }
    }
    private void initVideoRecorder() {
        strVideoPath = Util.createFinalPath(); //Util.createFinalPath(this);
        RecorderParameters recorderParameters = Util.getRecorderParameter(currentResolution);
        sampleRate = recorderParameters.getAudioSamplingRate();
        frameRate = recorderParameters.getVideoFrameRate();
        frameTime = (1000000L / frameRate);
        fileVideoPath = new File(strVideoPath);
        videoRecorder = new FFmpegFrameRecorder(strVideoPath, 480, 480, 1);
        videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
        videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
        videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
        videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
        videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
        videoRecorder.setAudioQuality(recorderParameters.getVideoQuality()); // note: reuses the video quality value
        videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
        videoRecorder.setVideoBitrate(recorderParameters.getVideoBitrate());
        videoRecorder.setAudioBitrate(recorderParameters.getAudioBitrate());
        audioRecordRunnable = new AudioRecordRunnable();
        audioThread = new Thread(audioRecordRunnable);
    }

    public void startRecording() {
        try {
            videoRecorder.start();
            audioThread.start();
        } catch (FFmpegFrameRecorder.Exception e) {
            e.printStackTrace();
        }
    }
    /**
     * Stops the recording on a background thread.
     * @author QD
     */
    public class AsyncStopRecording extends AsyncTask<Void, Void, Void> {

        @Override
        protected void onPreExecute() {
            isFinalizing = true;
            recordFinish = true;
            runAudioThread = false;
            // Show the "processing" progress dialog
            creatingProgress = new Dialog(FFmpegRecorderActivity.this, R.style.Dialog_loading_noDim);
            Window dialogWindow = creatingProgress.getWindow();
            WindowManager.LayoutParams lp = dialogWindow.getAttributes();
            lp.width = getResources().getDisplayMetrics().densityDpi * 240;
            lp.height = getResources().getDisplayMetrics().densityDpi * 80;
            lp.gravity = Gravity.CENTER;
            dialogWindow.setAttributes(lp);
            creatingProgress.setCanceledOnTouchOutside(false);
            creatingProgress.setContentView(R.layout.activity_recorder_progress);
            creatingProgress.show();
            txtTimer.setVisibility(View.INVISIBLE);
            handler.removeCallbacks(mUpdateTimeTask);
            super.onPreExecute();
        }

        @Override
        protected Void doInBackground(Void... params) {
            isFinalizing = false;
            if (videoRecorder != null && recording) {
                recording = false;
                releaseResources();
            }
            return null;
        }

        @Override
        protected void onPostExecute(Void result) {
            creatingProgress.dismiss();
            registerVideo();
            returnToCaller(true);
            videoRecorder = null;
        }
    }

    /**
     * Confirmation dialog shown before discarding the video.
     */
    private void showCancellDialog() {
        Util.showDialog(FFmpegRecorderActivity.this, "Notice", "Discard this video?", 2, new Handler() {
            @Override
            public void dispatchMessage(Message msg) {
                if (msg.what == 1) videoTheEnd(false);
            }
        });
    }

    @Override
    public void onBackPressed() {
        if (recording) showCancellDialog();
        else videoTheEnd(false);
    }

    /**
     * The audio capture thread.
     * @author QD
     */
    class AudioRecordRunnable implements Runnable {

        int bufferSize;
        short[] audioData;
        int bufferReadResult;
        private final AudioRecord audioRecord;
        public volatile boolean isInitialized;
        private int mCount = 0;

        private AudioRecordRunnable() {
            bufferSize = AudioRecord.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            audioData = new short[bufferSize];
        }

        /**
         * The ShortBuffer carries the audio samples and their start position.
         * @param shortBuffer
         */
        private void record(ShortBuffer shortBuffer) {
            try {
                synchronized (mAudioRecordLock) {
                    if (videoRecorder != null) {
                        this.mCount += shortBuffer.limit();
                        videoRecorder.record(new Buffer[] { shortBuffer });
                    }
                    return;
                }
            } catch (FrameRecorder.Exception localException) {
            }
        }

        /**
         * Update the audio timestamp.
         */
        private void updateTimestamp() {
            if (videoRecorder != null) {
                int i = Util.getTimeStampInNsFromSampleCounted(this.mCount);
                if (mAudioTimestamp != i) {
                    mAudioTimestamp = i;
                    mAudioTimeRecorded = System.nanoTime();
                }
            }
        }

        public void run() {
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            this.isInitialized = false;
            if (audioRecord != null) {
                // Wait for the AudioRecord to finish initializing
                while (this.audioRecord.getState() == 0) {
                    try {
                        Thread.sleep(100L);
                    } catch (InterruptedException localInterruptedException) {
                    }
                }
                this.isInitialized = true;
                this.audioRecord.startRecording();
                while (((runAudioThread) || (mVideoTimestamp > mAudioTimestamp)) && (mAudioTimestamp < (1000 * recordingTime))) {
                    updateTimestamp();
                    bufferReadResult = this.audioRecord.read(audioData, 0, audioData.length);
                    if ((bufferReadResult > 0) && ((recording && rec) || (mVideoTimestamp > mAudioTimestamp)))
                        record(ShortBuffer.wrap(audioData, 0, bufferReadResult));
                }
                this.audioRecord.stop();
                this.audioRecord.release();
            }
        }
    }
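    // Util.getTimeStampInNsFromSampleCounted() is not shown in this post. Judging by how
    // its result is consumed (compared against frameTime = 1000000/frameRate and against
    // 1000 * recordingTime), the value is in microseconds despite the "Ns" in the name.
    // A plausible equivalent for the mono 16-bit 44.1 kHz stream used here (hypothetical):
    private static int audioSamplesToMicros(int sampleCount) {
        return (int) (sampleCount * 1000000L / 44100L);  // samples / sample-rate → µs
    }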
    // Capture the first frame as the cover image
    private boolean isFirstFrame = true;
    private String captureBitmapPath = CONSTANTS.CAMERA_FOLDER_PATH;

    private void getFirstCapture(byte[] data) {
        //captureBitmapPath = Util.createImagePath(this);
        YuvImage localYuvImage = new YuvImage(data, 17, previewWidth, previewHeight, null); // 17 == ImageFormat.NV21
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        FileOutputStream outStream = null;
        try {
            File file = new File(captureBitmapPath);
            if (!file.exists()) file.createNewFile();
            localYuvImage.compressToJpeg(new Rect(0, 0, previewWidth, previewHeight), 100, bos);
            Bitmap localBitmap1 = BitmapFactory.decodeByteArray(bos.toByteArray(), 0, bos.toByteArray().length);
            bos.close();
            Matrix localMatrix = new Matrix();
            if (cameraSelection == 0) localMatrix.setRotate(90.0F);
            else localMatrix.setRotate(270.0F);
            Bitmap localBitmap2 = Bitmap.createBitmap(localBitmap1, 0, 0, localBitmap1.getHeight(), localBitmap1.getHeight(), localMatrix, true);
            ByteArrayOutputStream bos2 = new ByteArrayOutputStream();
            localBitmap2.compress(Bitmap.CompressFormat.JPEG, 100, bos2);
            outStream = new FileOutputStream(captureBitmapPath);
            outStream.write(bos2.toByteArray());
            outStream.close();
            localBitmap1.recycle();
            localBitmap2.recycle();
            isFirstFrame = false;
            imagePath = captureBitmapPath;
        } catch (FileNotFoundException e) {
            isFirstFrame = true;
            e.printStackTrace();
        } catch (IOException e) {
            isFirstFrame = true;
            e.printStackTrace();
        }
    }

    /**
     * Displays the camera preview and hands back every preview frame.
     * @author QD
     */
    class CameraView extends SurfaceView implements SurfaceHolder.Callback, PreviewCallback {

        private SurfaceHolder mHolder;

        public CameraView(Context context, Camera camera) {
            super(context);
            mCamera = camera;
            cameraParameters = mCamera.getParameters();
            mHolder = getHolder();
            mHolder.addCallback(CameraView.this);
            mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
            mCamera.setPreviewCallback(CameraView.this);
        }

        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            try {
                stopPreview();
                mCamera.setPreviewDisplay(holder);
            } catch (IOException exception) {
                mCamera.release();
                mCamera = null;
            }
        }

        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
            if (isPreviewOn) mCamera.stopPreview();
            handleSurfaceChanged();
            startPreview();
            mCamera.autoFocus(null);
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            try {
                mHolder.addCallback(null);
                mCamera.setPreviewCallback(null);
            } catch (RuntimeException e) {
            }
        }

        public void startPreview() {
            if (!isPreviewOn && mCamera != null) {
                isPreviewOn = true;
                mCamera.startPreview();
            }
        }

        public void stopPreview() {
            if (isPreviewOn && mCamera != null) {
                isPreviewOn = false;
                mCamera.stopPreview();
            }
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // Work out this frame's timestamp
            long frameTimeStamp = 0L;
            if (mAudioTimestamp == 0L && firstTime > 0L)
                frameTimeStamp = 1000L * (System.currentTimeMillis() - firstTime);
            else if (mLastAudioTimestamp == mAudioTimestamp)
                frameTimeStamp = mAudioTimestamp + frameTime;
            else {
                long l2 = (System.nanoTime() - mAudioTimeRecorded) / 1000L;
                frameTimeStamp = l2 + mAudioTimestamp;
                mLastAudioTimestamp = mAudioTimestamp;
            }
            // Record the video frame
            synchronized (mVideoRecordLock) {
                if (recording && rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null) {
                    // Save the first frame as the cover image
                    if (isFirstFrame) {
                        isFirstFrame = false;
                        Message msg = mHandler.obtainMessage(1);
                        msg.obj = data;
                        msg.what = 1;
                        mHandler.sendMessage(msg);
                    }
                    // Subtract accumulated pauses and one frame's worth of ms
                    totalTime = System.currentTimeMillis() - firstTime - pausedTime - (long) (1000.0 / frameRate);
                    if (!nextEnabled && totalTime >= recordingMinimumTime) {
                        nextEnabled = true;
                        nextBtn.setEnabled(true);
                        currentRecorderState = RecorderState.SUCCESS;
                        mHandler.sendEmptyMessage(2);
                    }
                    if (currentRecorderState == RecorderState.PRESS && totalTime >= recordingChangeTime) {
                        currentRecorderState = RecorderState.LOOSEN;
                        mHandler.sendEmptyMessage(2);
                    }
                    mVideoTimestamp += frameTime;
                    if (lastSavedframe.getTimeStamp() > mVideoTimestamp)
                        mVideoTimestamp = lastSavedframe.getTimeStamp();
                    try {
                        yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
                        videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                        videoRecorder.record(yuvIplImage);
                    } catch (com.googlecode.javacv.FrameRecorder.Exception e) {
                        //Log.i("recorder", "recording error " + e.getMessage());
                        e.printStackTrace();
                    }
                }
                lastSavedframe = new SavedFrames(data, frameTimeStamp);
            }
        }
    }
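    // The timestamp selection at the top of onPreviewFrame(), isolated as a pure
    // function for clarity (names invented here; units are microseconds):
    private static long chooseFrameTimestampUs(long audioTsUs, long lastAudioTsUs,
            long audioTsRecordedNs, long firstTimeMs, long frameTimeUs) {
        if (audioTsUs == 0L && firstTimeMs > 0L)
            // No audio yet: fall back to wall-clock time since recording began.
            return 1000L * (System.currentTimeMillis() - firstTimeMs);
        if (lastAudioTsUs == audioTsUs)
            // The audio clock has not advanced: extrapolate by one frame duration.
            return audioTsUs + frameTimeUs;
        // The audio clock advanced: audio timestamp plus the time elapsed since it was taken.
        return audioTsUs + (System.nanoTime() - audioTsRecordedNs) / 1000L;
    }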
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (!recordFinish) {
            if (totalTime < recordingTime) {
                switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                    // If the recorder has not been initialized yet, message 3 will do it
                    mHandler.removeMessages(3);
                    mHandler.removeMessages(4);
                    mHandler.sendEmptyMessageDelayed(3, 300);
                    break;
                case MotionEvent.ACTION_UP:
                    mHandler.removeMessages(3);
                    mHandler.removeMessages(4);
                    if (rec) mHandler.sendEmptyMessage(4);
                    break;
                }
            } else {
                // Past the maximum duration: stop and save the video
                rec = false;
                saveRecording();
            }
        }
        return true;
    }

    /**
     * Stop the camera preview.
     */
    public void stopPreview() {
        if (isPreviewOn && mCamera != null) {
            isPreviewOn = false;
            mCamera.stopPreview();
        }
    }

    private void handleSurfaceChanged() {
        // Query every preview resolution the camera supports
        List<Camera.Size> resolutionList = Util.getResolutionList(mCamera);
        if (resolutionList != null && resolutionList.size() > 0) {
            Collections.sort(resolutionList, new Util.ResolutionComparator());
            Camera.Size previewSize = null;
            if (defaultScreenResolution == -1) {
                boolean hasSize = false;
                // If the camera supports 640*480, force that
                for (int i = 0; i < resolutionList.size(); i++) {
                    Size size = resolutionList.get(i);
                    if (size != null && size.width == 640 && size.height == 480) {
                        previewSize = size;
                        hasSize = true;
                        break;
                    }
                }
                // Otherwise fall back to the middle entry of the sorted list
                if (!hasSize) {
                    int mediumResolution = resolutionList.size() / 2;
                    if (mediumResolution >= resolutionList.size())
                        mediumResolution = resolutionList.size() - 1;
                    previewSize = resolutionList.get(mediumResolution);
                }
            } else {
                if (defaultScreenResolution >= resolutionList.size())
                    defaultScreenResolution = resolutionList.size() - 1;
                previewSize = resolutionList.get(defaultScreenResolution);
            }
            // Apply the chosen preview resolution
            if (previewSize != null) {
                previewWidth = previewSize.width;
                previewHeight = previewSize.height;
                cameraParameters.setPreviewSize(previewWidth, previewHeight);
                if (videoRecorder != null) {
                    videoRecorder.setImageWidth(previewWidth);
                    videoRecorder.setImageHeight(previewHeight);
                }
            }
        }
        // Set the preview frame rate
        cameraParameters.setPreviewFrameRate(frameRate);
        // Build the IplImage used for recording, just like cvCreateImage in opencv;
        // two 8-bit channels are enough to hold an NV21 frame
        yuvIplImage = IplImage.create(previewWidth, previewHeight, IPL_DEPTH_8U, 2);
        // Continuous video focus is not available on Froyo (API 8) and below
        if (Build.VERSION.SDK_INT > Build.VERSION_CODES.FROYO) {
            mCamera.setDisplayOrientation(Util.determineDisplayOrientation(FFmpegRecorderActivity.this, defaultCameraId));
            List<String> focusModes = cameraParameters.getSupportedFocusModes();
            if (focusModes != null && focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
                cameraParameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
            }
        } else mCamera.setDisplayOrientation(90);
        mCamera.setParameters(cameraParameters);
    }
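    // Util.determineDisplayOrientation() is not shown in this post either; the standard
    // recipe from the Camera.setDisplayOrientation() documentation looks like this
    // (a sketch, not necessarily the author's exact implementation):
    private static int displayOrientationSketch(Activity activity, int cameraId) {
        CameraInfo info = new CameraInfo();
        Camera.getCameraInfo(cameraId, info);
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        int degrees = 0;
        switch (rotation) {
            case android.view.Surface.ROTATION_0:   degrees = 0;   break;
            case android.view.Surface.ROTATION_90:  degrees = 90;  break;
            case android.view.Surface.ROTATION_180: degrees = 180; break;
            case android.view.Surface.ROTATION_270: degrees = 270; break;
        }
        if (info.facing == CameraInfo.CAMERA_FACING_FRONT) {
            int result = (info.orientation + degrees) % 360;
            return (360 - result) % 360;  // compensate for the front camera's mirroring
        }
        return (info.orientation - degrees + 360) % 360;
    }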
    @Override
    public void onClick(View v) {
        if (v.getId() == R.id.recorder_next) {
            // "Next" button
            if (isRecordingStarted) {
                rec = false;
                if (totalTime >= recordingMinimumTime)
                    saveRecording();
                else
                    showCancellDialog();
            } else
                initiateRecording(false);
        } else if (v.getId() == R.id.recorder_flashlight) {
            // Flashlight toggle
            if (isFlashOn) {
                isFlashOn = false;
                cameraParameters.setFlashMode(Parameters.FLASH_MODE_OFF);
            } else {
                isFlashOn = true;
                cameraParameters.setFlashMode(Parameters.FLASH_MODE_TORCH);
            }
            mCamera.setParameters(cameraParameters);
        } else if (v.getId() == R.id.recorder_frontcamera) {
            // Switch between front and back camera
            cameraSelection = ((cameraSelection == CameraInfo.CAMERA_FACING_BACK)
                    ? CameraInfo.CAMERA_FACING_FRONT : CameraInfo.CAMERA_FACING_BACK);
            initCameraLayout();
            if (cameraSelection == CameraInfo.CAMERA_FACING_FRONT)
                flashIcon.setVisibility(View.GONE);
            else {
                flashIcon.setVisibility(View.VISIBLE);
                if (isFlashOn) {
                    cameraParameters.setFlashMode(Parameters.FLASH_MODE_TORCH);
                    mCamera.setParameters(cameraParameters);
                }
            }
        } else if (v.getId() == R.id.recorder_cancel)
            videoTheEnd(false);
    }

    /**
     * End recording.
     * @param isSuccess
     */
    public void videoTheEnd(boolean isSuccess) {
        releaseResources();
        if (fileVideoPath != null && fileVideoPath.exists() && !isSuccess)
            fileVideoPath.delete();
        returnToCaller(isSuccess);
    }

    /**
     * Set the result returned to the caller.
     * @param valid
     */
    private void returnToCaller(boolean valid) {
        try {
            setActivityResult(valid);
            if (valid) {
                Intent intent = new Intent(this, FFmpegPreviewActivity.class);
                intent.putExtra("path", strVideoPath);
                intent.putExtra("imagePath", imagePath);
                startActivity(intent);
            }
        } catch (Throwable e) {
        } finally {
            finish();
        }
    }

    private void setActivityResult(boolean valid) {
        Intent resultIntent = new Intent();
        int resultCode;
        if (valid) {
            resultCode = RESULT_OK;
            resultIntent.setData(uriVideoPath);
        } else
            resultCode = RESULT_CANCELED;
        setResult(resultCode, resultIntent);
    }

    /**
     * Register the recorded video with the system so the file shows up on the SD card.
     */
    private void registerVideo() {
        Uri videoTable = Uri.parse(CONSTANTS.VIDEO_CONTENT_URI);
        Util.videoContentValues.put(Video.Media.SIZE, new File(strVideoPath).length());
        try {
            uriVideoPath = getContentResolver().insert(videoTable, Util.videoContentValues);
        } catch (Throwable e) {
            uriVideoPath = null;
            strVideoPath = null;
            e.printStackTrace();
        }
        Util.videoContentValues = null;
    }

    /**
     * Save the recorded video file.
     */
    private void saveRecording() {
        if (isRecordingStarted) {
            runAudioThread = false;
            if (!isRecordingSaved) {
                isRecordingSaved = true;
                new AsyncStopRecording().execute();
            }
        } else {
            videoTheEnd(false);
        }
    }

    /**
     * Compute and display the total recording time.
     */
    private synchronized void setTotalVideoTime() {
        if (totalTime > 0)
            txtTimer.setText(Util.getRecordingTimeFromMillis(totalTime));
    }

    /**
     * Release resources and stop video and audio recording.
     */
    private void releaseResources() {
        isRecordingSaved = true;
        try {
            if (videoRecorder != null) {
                videoRecorder.stop();
                videoRecorder.release();
            }
        } catch (com.googlecode.javacv.FrameRecorder.Exception e) {
            e.printStackTrace();
        }
        yuvIplImage = null;
        videoRecorder = null;
        lastSavedframe = null;
        progressView.putProgressList((int) totalTime);
        // Stop updating the progress bar
        progressView.setCurrentState(State.PAUSE);
    }

    /**
     * Initialize the recording state on the first press.
     * @param isActionDown
     */
    private void initiateRecording(boolean isActionDown) {
        isRecordingStarted = true;
        firstTime = System.currentTimeMillis();
        recording = true;
        totalPauseTime = 0;
        pausedTime = 0;
        txtTimer.setVisibility(View.VISIBLE);
        handler.removeCallbacks(mUpdateTimeTask);
        handler.postDelayed(mUpdateTimeTask, 100);
    }

    public static enum RecorderState {
        PRESS(1), LOOSEN(2), CHANGE(3), SUCCESS(4);

        static RecorderState mapIntToValue(final int stateInt) {
            for (RecorderState value : RecorderState.values()) {
                if (stateInt == value.getIntValue()) {
                    return value;
                }
            }
            return PRESS;
        }

        private int mIntValue;

        RecorderState(int intValue) {
            mIntValue = intValue;
        }

        int getIntValue() {
            return mIntValue;
        }
    }
}
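The Activity above also leans on a small SavedFrames holder that the post never shows. Judging from its call sites in onPreviewFrame (the two-argument constructor plus getFrameBytesData() and getTimeStamp()), a minimal sketch could look like the following; the class in the actual project may carry more than this.

package com.qd.recorder;

// Minimal sketch of the SavedFrames holder, reconstructed from its call sites
// in onPreviewFrame above (assumption: the project's real class may differ)
public class SavedFrames {
    private final byte[] frameBytesData; // raw YUV preview bytes for one frame
    private final long timeStamp;        // frame timestamp in microseconds

    public SavedFrames(byte[] frameBytesData, long timeStamp) {
        this.frameBytesData = frameBytesData;
        this.timeStamp = timeStamp;
    }

    public byte[] getFrameBytesData() {
        return frameBytesData;
    }

    public long getTimeStamp() {
        return timeStamp;
    }
}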
Below is the progress bar I implemented myself. The code still has quite a few problems; if anyone knows of a better implementation, please contact me. Many thanks.

package com.qd.recorder;

import java.util.Iterator;
import java.util.LinkedList;

import android.app.Activity;
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.util.DisplayMetrics;
import android.view.View;

public class ProgressView extends View {

    public ProgressView(Context context) {
        super(context);
        init(context);
    }

    public ProgressView(Context paramContext, AttributeSet paramAttributeSet) {
        super(paramContext, paramAttributeSet);
        init(paramContext);
    }

    public ProgressView(Context paramContext, AttributeSet paramAttributeSet, int paramInt) {
        super(paramContext, paramAttributeSet, paramInt);
        init(paramContext);
    }

    private Paint progressPaint, firstPaint, threePaint, breakPaint; // paints for the different colors
    private float firstWidth = 4f, threeWidth = 1f; // widths of the blinking cursor and the break markers
    private LinkedList<Integer> linkedList = new LinkedList<Integer>();
    private float perPixel = 0f;
    private float countRecorderTime = 8000; // total recording time in ms

    public void setTotalTime(float time) {
        countRecorderTime = time;
    }

    private void init(Context paramContext) {
        progressPaint = new Paint();
        firstPaint = new Paint();
        threePaint = new Paint();
        breakPaint = new Paint();
        // background
        setBackgroundColor(Color.parseColor("#19000000"));
        // main progress color
        progressPaint.setStyle(Paint.Style.FILL);
        progressPaint.setColor(Color.parseColor("#19e3cf"));
        // blinking yellow cursor
        firstPaint.setStyle(Paint.Style.FILL);
        firstPaint.setColor(Color.parseColor("#ffcc42"));
        // marker at the 3-second point
        threePaint.setStyle(Paint.Style.FILL);
        threePaint.setColor(Color.parseColor("#12a899"));
        breakPaint.setStyle(Paint.Style.FILL);
        breakPaint.setColor(Color.parseColor("#000000"));
        DisplayMetrics dm = new DisplayMetrics();
        ((Activity) paramContext).getWindowManager().getDefaultDisplay().getMetrics(dm);
        perPixel = dm.widthPixels / countRecorderTime;
        perSecProgress = perPixel;
    }

    /**
     * Drawing state.
     * @author QD
     */
    public static enum State {
        START(0x1), PAUSE(0x2);

        static State mapIntToValue(final int stateInt) {
            for (State value : State.values()) {
                if (stateInt == value.getIntValue()) {
                    return value;
                }
            }
            return PAUSE;
        }

        private int mIntValue;

        State(int intValue) {
            mIntValue = intValue;
        }

        int getIntValue() {
            return mIntValue;
        }
    }

    private State currentState = State.PAUSE; // current state
    private boolean isVisible = true;         // whether the blinking yellow cursor is visible
    private float countWidth = 0;             // length of the finished segments in each draw pass
    private float perProgress = 0;            // extra length the bar grows while the finger is down
    private float perSecProgress = 0;         // pixels per millisecond
    private long initTime;                    // timestamp of the previous draw pass
    private long drawFlashTime = 0;           // timestamp of the last cursor blink
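    // Worked example for the mapping above (assumed values): on a screen 1080 px wide
    // with the default 8000 ms cap, perPixel = 1080 / 8000 = 0.135 px per millisecond,
    // so one second of recording advances the bar by 135 px and the full width is
    // reached exactly at the 8-second limit.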
    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        long curTime = System.currentTimeMillis();
        // Log.i("recorder", curTime - initTime + "");
        countWidth = 0;
        // On every pass, draw the saved segments from the queue in time order
        if (!linkedList.isEmpty()) {
            float frontTime = 0;
            Iterator<Integer> iterator = linkedList.iterator();
            while (iterator.hasNext()) {
                int time = iterator.next();
                // left edge of this segment
                float left = countWidth;
                // right edge of this segment
                countWidth += (time - frontTime) * perPixel;
                // draw the segment
                canvas.drawRect(left, 0, countWidth, getMeasuredHeight(), progressPaint);
                // draw the break marker
                canvas.drawRect(countWidth, 0, countWidth + threeWidth, getMeasuredHeight(), breakPaint);
                countWidth += threeWidth;
                frontTime = time;
            }
            // draw the marker at the 3-second point
            if (linkedList.getLast() <= 3000)
                canvas.drawRect(perPixel * 3000, 0, perPixel * 3000 + threeWidth, getMeasuredHeight(), threePaint);
        } else {
            // draw the marker at the 3-second point
            canvas.drawRect(perPixel * 3000, 0, perPixel * 3000 + threeWidth, getMeasuredHeight(), threePaint);
        }
        // While the finger is held down, the bar keeps growing
        if (currentState == State.START) {
            perProgress += perSecProgress * (curTime - initTime);
            if (countWidth + perProgress <= getMeasuredWidth())
                canvas.drawRect(countWidth, 0, countWidth + perProgress, getMeasuredHeight(), progressPaint);
            else
                canvas.drawRect(countWidth, 0, getMeasuredWidth(), getMeasuredHeight(), progressPaint);
        }
        // Blink the yellow cursor every 500 ms
        if (drawFlashTime == 0 || curTime - drawFlashTime >= 500) {
            isVisible = !isVisible;
            drawFlashTime = System.currentTimeMillis();
        }
        if (isVisible) {
            if (currentState == State.START)
                canvas.drawRect(countWidth + perProgress, 0, countWidth + firstWidth + perProgress, getMeasuredHeight(), firstPaint);
            else
                canvas.drawRect(countWidth, 0, countWidth + firstWidth, getMeasuredHeight(), firstPaint);
        }
        // Remember when this pass finished and schedule the next one
        initTime = System.currentTimeMillis();
        invalidate();
    }

    /**
     * Set the progress bar state.
     * @param state
     */
    public void setCurrentState(State state) {
        currentState = state;
        if (state == State.PAUSE)
            perProgress = perSecProgress;
    }

    /**
     * When the finger lifts, save the elapsed time into the queue.
     * @param time elapsed time in milliseconds
     */
    public void putProgressList(int time) {
        linkedList.add(time);
    }
}
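To tie the two pieces together, here is a rough usage sketch of how the recorder Activity drives this ProgressView. The calls match the listings above, but the exact wiring (the hypothetical onPressConfirmed/onFingerLifted hooks and field names in this sketch) is my reconstruction, not the project's literal code:

// Rough usage sketch (assumption: reconstructed from the listings above)
public class ProgressViewUsageSketch {
    private ProgressView progressView; // inflated from the layout in the real Activity
    private long totalTime;            // recorded time in ms, maintained in onPreviewFrame

    void onRecordingConfigured() {
        progressView.setTotalTime(8000); // the 8-second cap
    }

    void onPressConfirmed() { // handler message 3 in the Activity
        progressView.setCurrentState(ProgressView.State.START); // the bar starts growing
    }

    void onFingerLifted() { // handler message 4 in the Activity
        progressView.setCurrentState(ProgressView.State.PAUSE); // freeze the bar
        progressView.putProgressList((int) totalTime);          // store the break point
    }
}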

 

Copyright notice: this is the blogger's original article, licensed under the CC 4.0 BY-SA agreement. Please attach the original source link and this notice when reposting.
Original link: https://blog.csdn.net/qdrzwd2/article/details/84547751
