Encoding and Streaming a USB Basler Industrial Camera on the NVIDIA TX2

Tags: NVIDIA TK1/TX1/TX2/Xavier  nvidia-tx2  Yolo  nvidia-tx1  Decode/Encode/Push/Pull stream  codec

Performance: two 2448×2048 Basler camera streams encoded at 25 fps, with roughly 200 ms of latency.

Encoding: the low-level NVIDIA encoder API is used; see the cuda_encode sample under tegra_multimedia_api. The only change needed is to make the read_from_file() function consume camera data instead of file data.
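
As a rough sketch of that change (the function name read_from_camera() and the assumption of a tightly packed planar I420 source frame are mine, not part of the sample), the file read becomes a copy from the latest camera frame into the encoder's NvBuffer planes:

// Hypothetical replacement for the sample's read_from_file(): instead of
// reading the Y/U/V planes from a FILE*, copy them from a camera frame that
// has already been converted to tightly packed planar I420.
#include <cstring>      // memcpy
#include "NvBuffer.h"   // tegra_multimedia_api buffer class

static int read_from_camera(NvBuffer &buffer, const unsigned char *i420_frame)
{
    const unsigned char *src = i420_frame;
    for (unsigned int i = 0; i < buffer.n_planes; i++)
    {
        NvBuffer::NvBufferPlane &plane = buffer.planes[i];
        unsigned char *dst = plane.data;
        unsigned int row_bytes = plane.fmt.bytesperpixel * plane.fmt.width;

        // The source frame is tightly packed, while the NvBuffer plane may be
        // padded, so copy row by row and advance by the plane stride.
        for (unsigned int j = 0; j < plane.fmt.height; j++)
        {
            memcpy(dst, src, row_bytes);
            dst += plane.fmt.stride;
            src += row_bytes;
        }
        plane.bytesused = plane.fmt.stride * plane.fmt.height;
    }
    return 0;
}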

The encoder only accepts I420 input, and the Basler camera cannot output I420 directly, so the conversion is done with CUDA. The code is as follows:

// Grab.cpp
/*
    Note: Before getting started, Basler recommends reading the Programmer's Guide topic
    in the pylon C++ API documentation that gets installed with pylon.
    If you are upgrading to a higher major version of pylon, Basler also
    strongly recommends reading the Migration topic in the pylon C++ API documentation.

    This sample illustrates how to grab and process images using the CInstantCamera class.
    The images are grabbed and processed asynchronously, i.e.,
    while the application is processing a buffer, the acquisition of the next buffer is done
    in parallel.

    The CInstantCamera class uses a pool of buffers to retrieve image data
    from the camera device. Once a buffer is filled and ready,
    the buffer can be retrieved from the camera object for processing. The buffer
    and additional image data are collected in a grab result. The grab result is
    held by a smart pointer after retrieval. The buffer is automatically reused
    when explicitly released or when the smart pointer object is destroyed.
*/

// Include files to use the PYLON API.

#include <pylon/PylonIncludes.h>
#ifdef PYLON_WIN_BUILD
#    include <pylon/PylonGUI.h>
#endif
#include"opencv2/opencv.hpp"
#include"opencv2/highgui.hpp"

#include<sys/time.h>
#include"opencv2/core/core.hpp"
#include <pylon/gige/BaslerGigECamera.h>
#include <cuda_runtime.h>
#include "cudaYUV.h"
#include "cudaRGB.h"
// Save grabbed images to disk? 0 = no, 1 = yes
#define saveImages 0  
// Record video to a file? 0 = no, 1 = yes
#define recordVideo 0 

// Namespace for using pylon objects.
using namespace Pylon;

// Namespace for using cout.
using namespace std;
#include "opencv2/gpu/gpu.hpp"

using namespace cv;
// Number of images to be grabbed.
static const uint32_t c_countOfImagesToGrab = 1000000;

unsigned char *I420Buffer=NULL;
unsigned char *BGRABuffer=NULL;
unsigned char *RGBBuffer=NULL;

int main(int argc, char* argv[])
{
    // The exit code of the sample application.
    int exitCode = 0;

    // Before using any pylon methods, the pylon runtime must be initialized. 

    Pylon::PylonAutoInitTerm autoInitTerm;  
    try  
    {  
        // Create an Instant Camera object for the first camera device found.
        CInstantCamera camera(CTlFactory::GetInstance().CreateFirstDevice());  
        // Print the model name of the camera.
        std::cout << "Using device " << camera.GetDeviceInfo().GetModelName() << endl;  
        // Get the camera node map to access camera parameters.
        GenApi::INodeMap& nodemap = camera.GetNodeMap();  
        // Open the camera.
        camera.Open();  
        // Get the image width and height.
        GenApi::CIntegerPtr width = nodemap.GetNode("Width");  
        GenApi::CIntegerPtr height = nodemap.GetNode("Height");  

        // Set the maximum number of grab buffers (the default is 10).
        camera.MaxNumBuffer = 5;  
        // Create a pylon ImageFormatConverter object.
        CImageFormatConverter formatConverter;  
        // Set the output pixel format.
        formatConverter.OutputPixelFormat = PixelType_RGB8packed;  
        // Create a CPylonImage that will later be used to create OpenCV images.
        CPylonImage pylonImage;  
        CPylonImage pylonImage;  

        // Counter for grabbed images, also used to build file name indices.
        int grabbedImages = 0;  

        // Create an OpenCV video writer object.
        VideoWriter cvVideoCreator;  

        // Create OpenCV image objects.
        Mat openCvImage;  
        Mat dst_image;
        // Video file name.
        std::string videoFileName = "openCvVideo.avi";  

        // Define the video frame size.
        cv::Size frameSize = Size((int)width->GetValue(), (int)height->GetValue());  

        // Set the video codec and frame rate (three choices shown below).
        // The frame rate must not exceed the camera's acquisition frame rate.
        cvVideoCreator.open(videoFileName, CV_FOURCC('D', 'I', 'V','X'), 10, frameSize, true);  
        //cvVideoCreator.open(videoFileName, CV_FOURCC('M', 'P', '4', '2'), 20, frameSize, true);  
        //cvVideoCreator.open(videoFileName, CV_FOURCC('M', 'J', 'P', 'G'), 20, frameSize, true);  

        // Start grabbing c_countOfImagesToGrab images.
        // The camera is set to continuous acquisition by default.
        camera.StartGrabbing(c_countOfImagesToGrab, GrabStrategy_LatestImageOnly);  

        // Smart pointer that will receive the grab result.
        CGrabResultPtr ptrGrabResult;  

        // Camera.StopGrabbing() is called automatically by the RetrieveResult()
        // method once c_countOfImagesToGrab images have been retrieved.

        FILE *fp=fopen("1.yuv","w+");
        struct timeval start,end;
        while (camera.IsGrabbing())  

        {  
            // Wait for an image and then retrieve it. A timeout of 5000 ms is used.
            camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);  

            // If the image was grabbed successfully
            if (ptrGrabResult->GrabSucceeded())  
            {  
                // Get the image dimensions.
                int m_width=ptrGrabResult->GetWidth();
                int m_height=ptrGrabResult->GetHeight();
                cout <<"SizeX: "<<m_width<<endl;  
                cout <<"SizeY: "<<m_height<<endl;  

                // Convert the grabbed buffer to a pylon image.
                formatConverter.Convert(pylonImage, ptrGrabResult);  

                // Convert the pylon image to an OpenCV image.
                //openCvImage = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());  

                //cvtColor(openCvImage,dst_image,CV_RGB2RGBA);
        #if 0   
                // CPU-side conversion timing test (disabled).
                unsigned long long timeusd=0;
                gettimeofday(&start,NULL);
                cvtColor(openCvImage,dst_image,CV_BGR2YUV_I420);
                gettimeofday(&end,NULL);
                // Divide by 1000000.0 so the elapsed time is not truncated to whole seconds.
                cout<<"convert cost "<<end.tv_sec-start.tv_sec+(end.tv_usec-start.tv_usec)/1000000.0<< " sec"<<endl<<endl;
        #endif

                // Allocate the CUDA buffers once, on first use.
                if(!I420Buffer)
                {
                    // I420 is planar 4:2:0: a full-resolution Y plane plus
                    // quarter-resolution U and V planes = width*height*3/2 bytes.
                    if(CUDA_FAILED(cudaMalloc((void**)&I420Buffer, m_height*m_width*3/2)))
                    {
                        cout<<"cudaMalloc For I420Buffer Error"<<endl;
                    }
                }

                if(!RGBBuffer)
                {
                    if(CUDA_FAILED(cudaMalloc((void**)&RGBBuffer, m_height*m_width*3)))
                    {
                        cout<<"cudaMalloc For RGBBuffer Error"<<endl;
                    }
                }

                if(!BGRABuffer)
                {
                    // cudaRGBToRGBAf() writes float4 pixels (16 bytes each), so this
                    // intermediate buffer must be sized for float4, not 4 bytes/pixel.
                    if(CUDA_FAILED(cudaMalloc((void**)&BGRABuffer, m_height*m_width*sizeof(float4))))
                    {
                        cout<<"cudaMalloc For BGRABuffer Error"<<endl;
                    }
                }

                // Upload the packed RGB frame to the GPU.
                cudaMemcpy(RGBBuffer, pylonImage.GetBuffer(), m_height*m_width*3, cudaMemcpyHostToDevice);

                // RGB (uchar3) -> RGBA (float4) on the GPU.
                if(CUDA_FAILED(cudaRGBToRGBAf((uchar3 *)RGBBuffer, (float4 *)BGRABuffer, m_width, m_height)))
                {
                    cout<<"Failed to Convert RGB2RGBA"<<endl;
                }

                // RGBA -> planar I420 on the GPU.
                if(CUDA_FAILED(cudaRGBAToI420((uchar4 *)BGRABuffer, I420Buffer, m_width, m_height)))
                {
                    cout<<"Failed to Convert RGBAToI420"<<endl;
                }

                // Copy the I420 frame back to the host and append it to the dump file.
                // Allocated once on the heap: ~7.5 MB per 2448x2048 frame is far too
                // large for a stack array.
                static unsigned char *g_buffer = new unsigned char[m_height*m_width*3/2];
                cudaMemcpy(g_buffer, I420Buffer, m_width*m_height*3/2, cudaMemcpyDeviceToHost);
                fwrite(g_buffer, m_width*m_height*3/2, 1, fp);
                // Save the image if requested.
                if (saveImages)  
                {  
                   std::ostringstream s;              
                   // Build an indexed file name for the image.
                   s << "image_" << grabbedImages << ".jpg";  
                   std::string imageName(s.str());  
                   // Save the OpenCV image.
                   cv::imwrite(imageName, openCvImage);  
                   grabbedImages++;  
                }  

                // Record video if requested.
                if (recordVideo)  
                {  
                    cvVideoCreator.write(openCvImage);  
                }  

                // Create an OpenCV display window.
                //cv::namedWindow("OpenCV Display Window", CV_WINDOW_NORMAL); // other options: CV_AUTOSIZE, CV_FREERATIO  
                // Display the live image.
                //cv::imshow("OpenCV Display Window", dst_image);  

                // Define a timeout for the user's input:
                // '0' means indefinite, i.e. the next image is displayed after closing the window;
                // '1' means live stream.
                waitKey(1);  

            }  

        }  

        fclose(fp); 

    }  
    catch (GenICam::GenericException &e)  
    {  
        // Error handling.  
        cerr << "An exception occurred." << endl  
            << e.GetDescription() << endl;  
    }  
    return exitCode;  
}  
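
Before wiring in the encoder it is worth sanity-checking the dump: each I420 frame is width × height × 3/2 bytes (a full-resolution Y plane plus quarter-resolution U and V planes), so a 2448×2048 frame is roughly 7.5 MB, and the 1.yuv file written above can be played back as raw video with, for example, ffplay -f rawvideo -pixel_format yuv420p -video_size 2448x2048 1.yuv (the size argument must match the camera resolution).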

With the conversion in place, the encoding and streaming code is added. Streaming uses EasyDarwin as the RTSP relay server:

// Grab.cpp
/*
    Note: Before getting started, Basler recommends reading the Programmer's Guide topic
    in the pylon C++ API documentation that gets installed with pylon.
    If you are upgrading to a higher major version of pylon, Basler also
    strongly recommends reading the Migration topic in the pylon C++ API documentation.

    This sample illustrates how to grab and process images using the CInstantCamera class.
    The images are grabbed and processed asynchronously, i.e.,
    while the application is processing a buffer, the acquisition of the next buffer is done
    in parallel.

    The CInstantCamera class uses a pool of buffers to retrieve image data
    from the camera device. Once a buffer is filled and ready,
    the buffer can be retrieved from the camera object for processing. The buffer
    and additional image data are collected in a grab result. The grab result is
    held by a smart pointer after retrieval. The buffer is automatically reused
    when explicitly released or when the smart pointer object is destroyed.
*/

// Include files to use the PYLON API.

//gst_rtsp

#include "stdio.h"
#include <unistd.h>
#include <stdlib.h>
#include <iostream>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
#include <string.h>
#include <signal.h>
#include <errno.h>
#include <ctype.h>
#include <time.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>  /* for TCP_NODELAY  */
#include <arpa/inet.h>
#include <sys/uio.h>
#include <sys/time.h>
#include <pthread.h>
#include <sched.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <linux/fb.h>
#include <fcntl.h>
#include <semaphore.h>
#include <net/if.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/vfs.h>
#include <pthread.h>
#include "assert.h"
#include <netdb.h>
#include <malloc.h>

#include <deque>
#include <list>
#include <string>


#include "video_encode.h"
#include "libRtspPusherAPI.h"

#include <pylon/PylonIncludes.h>
#ifdef PYLON_WIN_BUILD
#    include <pylon/PylonGUI.h>
#endif
#include"opencv2/opencv.hpp"
#include"opencv2/highgui.hpp"

#include<sys/time.h>
#include"opencv2/core/core.hpp"
#include <pylon/gige/BaslerGigECamera.h>


// Save grabbed images to disk? 0 = no, 1 = yes
#define saveImages 0  
// Record video to a file? 0 = no, 1 = yes
#define recordVideo 0 

// Namespace for using pylon objects.
using namespace Pylon;

// Namespace for using cout.
using namespace std;
#include "opencv2/gpu/gpu.hpp"

using namespace cv;
// Number of images to be grabbed.
static const uint32_t c_countOfImagesToGrab = 1000000;

bool signal_received = false;

static int m_Width=0;
static int m_Height=0;

void sig_handler(int signo)
{
    if( signo == SIGINT )
    {
        printf("received SIGINT\n");
        signal_received = true;
    }
}

#define TEST_CHANNEL_NUM              5        // number of test channels
#define TEST_URL_MAX_LEN              128      // maximum URL length
#define TEST_LOOP_MAX_NUM             1000000  // maximum number of test loops
#define TEST_SINGLE_LOOP_TIME         30       // duration of a single test loop

typedef struct {
    char pullStreamUrl[TEST_URL_MAX_LEN+1];
    char pushStreamRtspUrl[TEST_URL_MAX_LEN+1];
    int  pullStreamSuccess;
    void* rtspPushStreamHandle;
} TestChannelConfigure;


#define TEST_CHANNEL_MAX_NUM          1        // maximum number of supported channels

TestChannelConfigure testChannelConfigure[TEST_CHANNEL_MAX_NUM] = {
    {"admin:buaa123456@rtsp://192.168.1.190", "rtsp://192.168.5.43/110.sdp", 0, NULL}
};


#define  ENC_MAX_BUFFER   5

sem_t sem_dec_output_read;
sem_t sem_dec_output_write;
pthread_mutex_t mutex;
void *g_buffer = NULL;
int  g_iReadBufferPos = 0;

// Encoder input callback: blocks until the grab thread publishes a new I420
// frame in g_buffer, then copies it plane by plane into the encoder's NvBuffer.
int encInput(unsigned char *Y, unsigned char *U, unsigned char*V, int *width, int *height, NvBuffer *buffer, void *pUserData)
{
    int i, j = 0;
    // Wait until a frame is available, then take the buffer lock.
    sem_wait(&sem_dec_output_read);
    pthread_mutex_lock(&mutex);

    for (i = 0; i < buffer->n_planes; i++)
    {
        NvBuffer::NvBufferPlane &plane = buffer->planes[i];

        //NvBuffer::NvBufferPlane &plane_dec = User->m_buffer->planes[i];

        unsigned char *data = plane.data;

        unsigned int bytes_to_read =
            plane.fmt.bytesperpixel * plane.fmt.width;

        /*
        ---DEBUG--[av_stream.cpp:  encInput: 2089]--[2017-07-01 07:12:45]--  encInput() channid[0] dec_buffer[1920 * 1080] 
        buffer.n_planes[3]              
        plane.fmt.bytesperpixel[1] 
        plane.fmt.width[1920] 
        plane.fmt.height[1080]  
        plane.fmt.stride[2048] 
        plane.bytesused[0]  
        ---DEBUG--[av_stream.cpp:  encInput: 2089]--[2017-07-01 07:12:45]--  encInput() channid[0] dec_buffer[1920 * 1080] 
        buffer.n_planes[3]              
        plane.fmt.bytesperpixel[1] 
        plane.fmt.width[960] 
        plane.fmt.height[540]  
        plane.fmt.stride[1024] 
        plane.bytesused[0]  
        ---DEBUG--[av_stream.cpp:  encInput: 2089]--[2017-07-01 07:12:45]--  encInput() channid[0] dec_buffer[1920 * 1080] 
        buffer.n_planes[3]              
        plane.fmt.bytesperpixel[1] 
        plane.fmt.width[960] 
        plane.fmt.height[540]  
        plane.fmt.stride[1024] 
        plane.bytesused[0]  
        */

        plane.bytesused = 0;
        for (j = 0; j < plane.fmt.height; j++)
        {
            // g_buffer is a void*, so cast before doing pointer arithmetic.
            memcpy(data, (unsigned char *)g_buffer + g_iReadBufferPos, bytes_to_read);
            data += plane.fmt.stride;
            g_iReadBufferPos += bytes_to_read;
        }

        plane.bytesused = plane.fmt.stride * plane.fmt.height;  

        printf( "i = [%d]  \nRes[%d*%d]  \nbytesperpixel[%d]  \nstride[%d] \nplane.bytesused[%d] \n", 
            i, plane.fmt.width, plane.fmt.height, plane.fmt.bytesperpixel, plane.fmt.stride, plane.bytesused);
    }

    // Frame consumed; allow the grab thread to publish the next one.
    g_buffer = NULL;
    g_iReadBufferPos = 0;
    pthread_mutex_unlock(&mutex);
    sem_post(&sem_dec_output_write);

    return 0;
}

FILE *fp4 = NULL;
// Encoder output callback: receives each encoded H.264 frame and forwards it
// to the RTSP pusher.
int encOutput(char *pBuf, int iLen, void *pDataTypePara, void *pUserData)
{
    printf("............func:%s  Line:%d 0x%2x%2x%2x%2x%2x%2x %2x%2x%2x %2x%2x%2x iLen[%d]   \n", 
            __func__, __LINE__, pBuf[0], pBuf[1], pBuf[2], pBuf[3], pBuf[4], pBuf[5], pBuf[6], pBuf[7], pBuf[8], pBuf[9], pBuf[10], pBuf[11],iLen);
#if 0
    if (NULL == fp4)
    {
        fp4 = fopen("out.264", "wb");
        int ret = fwrite(pBuf, 1, iLen, fp4);

    }
    else
    {
        int ret = fwrite(pBuf, 1, iLen, fp4);   
    }
#endif

#if 1
    // Push the encoded H.264 frame to the RTSP server.
    RTSP_Pusher_Handler rtspPushStreamHandle = testChannelConfigure[0].rtspPushStreamHandle;
    assert(rtspPushStreamHandle != NULL);
    TmAVFrame avFrame;
    avFrame.frameType = VIDEO_FRAME_TYPE;
    avFrame.frameLen = iLen;
    avFrame.frameData = (unsigned char*)pBuf;
    int ret = RTSP_Pusher_PushFrame(rtspPushStreamHandle, &avFrame);
    if( ret < 0 ) {
        printf("------------channel_%d push stream failed.\n", 0);
    }
#endif

    return 0;
}

int enc_StatueCallBack(int ichannle, int statue, int iNowStreamFlag, void *pUserData)
{
    return 0;
}


int RtspPushStreamLogCB(int logLevel,const char *logInfo,int len)
{
    printf("RtspPushStreamLogCB[%s] \n", logInfo);
    return 1;
}

int RtspPushStreamStatueCB(RTSP_Pusher_Handler handler,RTSP_Pusher_State state, int statusCode, void *context)
{
    printf("RtspPushStreamStatueCB() state[%d]  statusCode[%d] \n", state, statusCode);
    return 1;
}


int main(int argc, char* argv[])
{
    // The exit code of the sample application.
    int exitCode = 0;
    if(argc!=2)
    {
        std::cout<<"Usage: ./Grab0 rtsp://192.168.1.43(TX2_IP):554/110.sdp"<<endl;
        return -1;
    }
#if 1
    int ret;

    /* RTSP pusher setup */
    RTSP_Pusher_SetStartPort(50000);
    RTSP_Pusher_SetLogCallBack(RtspPushStreamLogCB);
    RTSP_Pusher_SetStreamStatusCallback(RtspPushStreamStatueCB, NULL);

    TmMediaInfo mediaInfo;
    RTSP_Pusher_Handler rtspPusherHandler = RTSP_Pusher_Create();
    assert(rtspPusherHandler != NULL);
    memset(&mediaInfo,0,sizeof(TmMediaInfo));
    mediaInfo.videoCodec = VIDEO_CODEC_H264;
    mediaInfo.videoFps   = 25;
    memset(testChannelConfigure[0].pushStreamRtspUrl, 0, sizeof(testChannelConfigure[0].pushStreamRtspUrl));
    strcpy(testChannelConfigure[0].pushStreamRtspUrl, argv[1]);
    ret = RTSP_Pusher_StartStream(rtspPusherHandler,
                                  testChannelConfigure[0].pushStreamRtspUrl,
                                  RTP_OVER_UDP,
                                  NULL,
                                  NULL,
                                  0,
                                  &mediaInfo);
    if(ret < 0 ) {
        printf("fun[%s] Line[%d] chId%d start push stream failed ret[%d].\n", __func__, __LINE__, 0, ret);
        return -1;   // main() returns int, not a pointer
    }
    testChannelConfigure[0].rtspPushStreamHandle = rtspPusherHandler;
#endif


    sem_init(&sem_dec_output_read, 0, 0);
    sem_init(&sem_dec_output_write, 0, 0);
    pthread_mutex_init(&mutex, NULL);
    g_buffer = NULL;
    g_iReadBufferPos = 0;

    TM_video_enc *enc_obj_merge;
    enc_context_t enc_cfg;
    // Create the hardware encoder object (channel 0).
    enc_obj_merge = new TM_video_enc(0);
    if (NULL == enc_obj_merge)
    {
        return 0;
    }
    enc_obj_merge->init(ENC_MAX_BUFFER);

    memset(&enc_cfg, 0, sizeof(enc_context_t));
    enc_cfg.ratecontrol = V4L2_MPEG_VIDEO_BITRATE_MODE_CBR;
    enc_cfg.iframe_interval = 25;
    enc_cfg.idr_interval = 25;
    enc_cfg.level = V4L2_MPEG_VIDEO_H264_LEVEL_5_1;
    enc_cfg.fps_n = 25;
    enc_cfg.fps_d = 1;
    enc_cfg.num_b_frames = (uint32_t) -1;
    enc_cfg.nMinQpI = (uint32_t)QP_RETAIN_VAL;
    enc_cfg.nMaxQpI = (uint32_t)QP_RETAIN_VAL;
    enc_cfg.nMinQpP = (uint32_t)QP_RETAIN_VAL;
    enc_cfg.nMaxQpP = (uint32_t)QP_RETAIN_VAL;
    enc_cfg.nMinQpB = (uint32_t)QP_RETAIN_VAL;
    enc_cfg.nMaxQpB = (uint32_t)QP_RETAIN_VAL;
    enc_cfg.in_file_path = "tmp.yuv";
    enc_cfg.out_file_path = "tmp.h264";

    if (0/*VIDEO_CODEC_H265 == m_strChannelEncCfg.VideoEncodeType*/)
    {
        enc_cfg.profile = V4L2_MPEG_VIDEO_H265_PROFILE_MAIN10;
        enc_cfg.encoder_pixfmt = V4L2_PIX_FMT_H265;
    }
    else
    {
        enc_cfg.level = V4L2_MPEG_VIDEO_H264_LEVEL_5_0;
        // For higher compression quality, raise iframe_interval to 50 or more and
        // switch the profile to HIGH; with HIGH, however, the recorded stream
        // reportedly failed to play back in Elecard, so MAIN is used here.
        enc_cfg.profile = V4L2_MPEG_VIDEO_H264_PROFILE_MAIN;
        enc_cfg.encoder_pixfmt = V4L2_PIX_FMT_H264;
    }




    /*create v4l2 enc*/
    //

    Pylon::PylonAutoInitTerm autoInitTerm;  

    try  
    {  
        // Create the camera object (using the first camera found).
        CTlFactory& tlFactory = CTlFactory::GetInstance();

        // Enumerate all attached devices.
        DeviceInfoList_t devices;
        CInstantCamera camera;
        if( tlFactory.EnumerateDevices(devices) == 2 )
        {
            cout << "Found " << devices.size() << " cameras." << endl;
        }
        // Attach and open the first device, and print its model name.
        camera.Attach(tlFactory.CreateDevice(devices[0]));
        std::cout << "Using device " << camera.GetDeviceInfo().GetModelName() << endl;
        camera.Open();

        GenApi::INodeMap& nodemap = camera.GetNodeMap();  
        // Get the image width and height.
        GenApi::CIntegerPtr width = nodemap.GetNode("Width");  
        GenApi::CIntegerPtr height = nodemap.GetNode("Height");  

        enc_cfg.width  = (int)width->GetValue();
        enc_cfg.height = (int)height->GetValue();

        // Force SPS/PPS to be inserted before every IDR frame.
        enc_cfg.insert_sps_pps_at_idr = true;

        enc_cfg.iframe_interval = 25;
        enc_cfg.idr_interval = 25;
        enc_cfg.fps_n = 25;
        enc_cfg.bitrate = 2048 * 1024;

        void *pUser = (void *)"user data";
        enc_obj_merge->set_config(&enc_cfg, encInput, encOutput, enc_StatueCallBack, (void *)pUser);
        enc_obj_merge->start();
        // Set the maximum number of grab buffers (the default is 10).
        camera.MaxNumBuffer = 5;  
        // Create a pylon ImageFormatConverter object.
        CImageFormatConverter formatConverter;  
        // Set the output pixel format.
        formatConverter.OutputPixelFormat = PixelType_BGR8packed;  
        // Create a CPylonImage that will later be used to create OpenCV images.
        CPylonImage pylonImage;  

        // Counter for grabbed images, also used to build file name indices.
        int grabbedImages = 0;  

        // Create an OpenCV video writer object.
        VideoWriter cvVideoCreator;  

        // Create OpenCV image objects.
        Mat openCvImage;  
        Mat dst_image;
        // Video file name.
        std::string videoFileName = "openCvVideo.avi";  

        // Define the video frame size.
        cv::Size frameSize = Size((int)width->GetValue(), (int)height->GetValue());  

        // Set the video codec and frame rate (three choices shown below).
        // The frame rate must not exceed the camera's acquisition frame rate.
        cvVideoCreator.open(videoFileName, CV_FOURCC('D', 'I', 'V','X'), 10, frameSize, true);  
        //cvVideoCreator.open(videoFileName, CV_FOURCC('M', 'P', '4', '2'), 20, frameSize, true);  
        //cvVideoCreator.open(videoFileName, CV_FOURCC('M', 'J', 'P', 'G'), 20, frameSize, true);  

        // Start grabbing c_countOfImagesToGrab images.
        // The camera is set to continuous acquisition by default.
        camera.StartGrabbing(c_countOfImagesToGrab, GrabStrategy_LatestImageOnly);  

        // Smart pointer that will receive the grab result.
        CGrabResultPtr ptrGrabResult;  

        // Camera.StopGrabbing() is called automatically by the RetrieveResult()
        // method once c_countOfImagesToGrab images have been retrieved.
        unsigned long timeusd = 0;
        FILE *fp = fopen("1.yuv", "w+");
        struct timeval start, end;
        while (camera.IsGrabbing())  

        {  
            // Wait for an image and then retrieve it. A timeout of 5000 ms is used.
            camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);  

            // If the image was grabbed successfully
            if (ptrGrabResult->GrabSucceeded())  
            {  
                // Print the image dimensions.
                cout <<"SizeX: "<<ptrGrabResult->GetWidth()<<endl;  
                cout <<"SizeY: "<<ptrGrabResult->GetHeight()<<endl;  

                // Convert the grabbed buffer to a pylon image.
                formatConverter.Convert(pylonImage, ptrGrabResult);  

                // Convert the pylon image to an OpenCV image.
                openCvImage = cv::Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *) pylonImage.GetBuffer());  

                // Convert BGR to planar I420 on the CPU; the result in dst_image
                // is handed to the encoder thread below.
                gettimeofday(&start,NULL);
                cvtColor(openCvImage,dst_image,CV_BGR2YUV_I420);
                gettimeofday(&end,NULL);

                // Publish the frame for encInput() and wait until it is consumed.
                pthread_mutex_lock(&mutex);
                g_buffer = dst_image.data;
                g_iReadBufferPos = 0;
                pthread_mutex_unlock(&mutex);

                sem_post(&sem_dec_output_read);
                sem_wait(&sem_dec_output_write);
                // Elapsed conversion time in seconds (end minus start, not the reverse).
                timeusd = end.tv_sec - start.tv_sec + (end.tv_usec - start.tv_usec) / 1000000;

                //imshow("test",dst_image);
                // Save the image if requested.
                if (saveImages)  
                {  
                   std::ostringstream s;              
                   // Build an indexed file name for the image.
                   s << "image_" << grabbedImages << ".jpg";  
                   std::string imageName(s.str());  
                   // Save the OpenCV image.
                   cv::imwrite(imageName, openCvImage);  
                   grabbedImages++;  
                }  

                // Record video if requested.
                if (recordVideo)  
                {  
                    cvVideoCreator.write(openCvImage);  
                }  

                // Create an OpenCV display window.
                //cv::namedWindow("OpenCV Display Window", CV_WINDOW_NORMAL); // other options: CV_AUTOSIZE, CV_FREERATIO  
                // Display the live image.
                //cv::imshow("OpenCV Display Window", dst_image);  

                // Define a timeout for the user's input:
                // '0' means indefinite, i.e. the next image is displayed after closing the window;
                // '1' means live stream.
                waitKey(1);  

            }  

        }  

    }  
    catch (GenICam::GenericException &e)  
    {  
        // Error handling.  
        cerr << "An exception occurred." << endl  
            << e.GetDescription() << endl;  
    }  
    return exitCode;  
}  
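
To test the whole chain: start EasyDarwin on the relay machine (its RTSP service listens on port 554 by default; check its config file), launch the program on the TX2 with the push URL, e.g. ./Grab0 rtsp://192.168.5.43:554/110.sdp, and then open the same URL from a client with ffplay or VLC. The addresses here are the examples used in the code above; substitute your own relay IP.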



Copyright notice: this is an original article by the blogger, released under the CC 4.0 BY-SA license. Please include a link to the original article and this notice when reposting.
Original article: https://blog.csdn.net/maxhn0/article/details/80101577
