Monday, March 12, 2012

Video to Texture Streaming (Part 2) - i.MX6 processor

After some months without any new posts (sorry for that), here I am again, continuing the last post about video to texture, but now using Freescale's latest release, the powerful i.MX6 processor. It is a quad-core processor that runs at up to 1.2GHz, although I'm using it at 1GHz.

In that last application a webcam was used as the video source, but what if we wanted to use a video file as the source for an application like that? The answer is simple: we need a video decoder.

Video decoding can be done in two ways:

1 - Software decoding
2 - Hardware decoding

Since the processor we are using supports hardware decoding, and Freescale's BSP already includes the VPU plugins/drivers, we can use it directly in GStreamer code, which makes things a lot easier.
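Just to illustrate how the VPU elements can be used directly, a plain decode-to-display pipeline could be built with gst_parse_launch(). This is only a sketch: the element names mfw_vpudecoder and mfw_v4lsink, and the file path, are assumptions based on the BSP release I'm using and may differ on other releases.

#include <gst/gst.h>

// minimal sketch: decode an MP4 file with the VPU and show it on the
// framebuffer; element names and path are assumptions for illustration only
int main (int argc, char *argv[])
{
    GError* pError = NULL;
    GstElement* pPipe = NULL;

    gst_init (&argc, &argv);

    pPipe = gst_parse_launch ("filesrc location=/home/video_to_play.mp4 ! "
                              "qtdemux ! mfw_vpudecoder ! mfw_v4lsink",
                              &pError);
    if (!pPipe)
    {
        g_printerr ("Couldn't create pipeline: %s\n", pError->message);
        return -1;
    }

    gst_element_set_state (pPipe, GST_STATE_PLAYING);
    g_main_loop_run (g_main_loop_new (NULL, FALSE));

    return 0;
}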

As an example I made a simple demo application that decodes a video and uses it as a texture for a cube, but since we are dealing with a quad core here, why not decode 2 videos onto 2 cubes to test its performance?

Before showing the result, let's first go through the GStreamer code.
void gst_play (const char *uri, GCallback handoffHandlerFunc)
{
    GstElement* pFfConv = NULL;
    GstElement* pSinkBin = NULL;
    GstPad* pFfConvSinkPad = NULL;
    GstPad* pSinkPad = NULL;

    netplayer_gst_stop ();

    pipeline = gst_pipeline_new ("gst-player");

    bin = gst_element_factory_make ("playbin2", "bin");
    videosink = gst_element_factory_make ("fakesink", "videosink");
    //videosink = gst_element_factory_make ("mfw_v4lsink", "videosink");
    g_object_set (G_OBJECT (videosink), "sync", TRUE, "signal-handoffs", TRUE, NULL);
    g_signal_connect (videosink, "handoff", handoffHandlerFunc, NULL);

    g_object_set (G_OBJECT (bin), "video-sink", videosink, NULL);
    g_object_set (G_OBJECT (bin), "volume", 0.5, NULL);

    bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
    gst_bus_add_watch (bus, bus_call, loop);
    gst_object_unref (bus);
    g_object_set (G_OBJECT (bin), "uri", uri, NULL);

    // colorspace conversion
    // it is added in a new bin, and then this bin is added to the first one (above)
    pFfConv = gst_element_factory_make ("ffmpegcolorspace", "ffconv");
    if (!pFfConv)
    {
        printf ("Couldn't create pFfConv\n");
    }

    // put the fake sink and caps filter into a single bin
    pSinkBin = gst_bin_new ("SinkBin");
    if (!pSinkBin)
    {
        printf ("Couldn't create pSinkBin\n");
    }
    gst_bin_add_many (GST_BIN (pSinkBin), pFfConv, videosink, NULL);
    gst_element_link_filtered (pFfConv, videosink,
        gst_caps_new_simple ("video/x-raw-rgb", "bpp", G_TYPE_INT, 16, NULL));

    // in order to link the sink bin to the playbin we have to create
    // a ghost pad that points to the colorspace converter's sink pad
    pFfConvSinkPad = gst_element_get_static_pad (pFfConv, "sink");
    if (!pFfConvSinkPad)
    {
        printf ("Couldn't create pFfConvSinkPad\n");
    }

    pSinkPad = gst_ghost_pad_new ("sink", pFfConvSinkPad);
    if (!pSinkPad)
    {
        printf ("Couldn't create pSinkPad\n");
    }
    gst_element_add_pad (pSinkBin, pSinkPad);
    gst_object_unref (pFfConvSinkPad);

    // force the SinkBin to be used as the video sink
    g_object_set (G_OBJECT (bin), "video-sink", pSinkBin, NULL);

    gst_bin_add (GST_BIN (pipeline), bin);

    gst_element_set_state (pipeline, GST_STATE_PAUSED);

    return;
}

The code above creates the GStreamer pipeline used for video decoding. Note that we could see the video directly on the framebuffer if we used mfw_v4lsink instead of fakesink.

fakesink is needed because we are not going to show the video in a video buffer; instead, we need to copy all the video data into another buffer and then use that buffer as the texture for our cubes.

This buffer is updated by the handoff callback function:

//handoff function, called every frame
void on_handoff (GstElement* pFakeSink, GstBuffer* pBuffer, GstPad* pPad)
{
    video_w = get_pad_width (pPad);
    video_h = get_pad_height (pPad);

    // copy the decoded RGB565 frame (2 bytes per pixel) into our own buffer
    gst_buffer_ref (pBuffer);
    memmove (g_pcFrameBuffer, GST_BUFFER_DATA (pBuffer), video_w * video_h * 2);
    gst_buffer_unref (pBuffer);
}
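get_pad_width() and get_pad_height() are not GStreamer functions, they are small helpers of my own that just read the negotiated caps from the pad. A minimal sketch, assuming the GStreamer 0.10 caps API, could look like this:

// hypothetical helpers: read the negotiated width/height from the pad's caps
int get_pad_width (GstPad* pPad)
{
    int width = 0;
    GstCaps* pCaps = gst_pad_get_negotiated_caps (pPad);
    if (pCaps)
    {
        GstStructure* pStruct = gst_caps_get_structure (pCaps, 0);
        gst_structure_get_int (pStruct, "width", &width);
        gst_caps_unref (pCaps);
    }
    return width;
}

int get_pad_height (GstPad* pPad)
{
    int height = 0;
    GstCaps* pCaps = gst_pad_get_negotiated_caps (pPad);
    if (pCaps)
    {
        GstStructure* pStruct = gst_caps_get_structure (pCaps, 0);
        gst_structure_get_int (pStruct, "height", &height);
        gst_caps_unref (pCaps);
    }
    return height;
}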

As in every GStreamer-based application, we need a loop to service the message bus. Since our main loop is already being used for rendering, we can run it in a separate thread:


void *GstLoop (void *ptr)
{
    while (1)
    {
        while ((bus_msg = gst_bus_pop (bus)))
        {
            // call your bus message handler
            bus_call (bus, bus_msg, NULL);
            gst_message_unref (bus_msg);
        }
    }
}


The bus_call function is a generic message-bus handler that can easily be found in the GStreamer documentation, for example: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-bus.html
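For reference, a minimal sketch of such a handler (handling only end-of-stream and error messages, following the bus handling described in the manual linked above) could look like this:

static gboolean bus_call (GstBus* pBus, GstMessage* pMsg, gpointer data)
{
    switch (GST_MESSAGE_TYPE (pMsg))
    {
        case GST_MESSAGE_EOS:
            g_print ("End of stream\n");
            break;

        case GST_MESSAGE_ERROR:
        {
            gchar* debug = NULL;
            GError* error = NULL;

            gst_message_parse_error (pMsg, &error, &debug);
            g_printerr ("Error: %s\n", error->message);
            g_error_free (error);
            g_free (debug);
            break;
        }

        default:
            break;
    }
    return TRUE;
}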

With those functions in place, your main function could look like this:


g_pcFrameBuffer = (gchar*)malloc (720*480*3);    // video buffer
gst_init (&argc, &argv);
loop = g_main_loop_new (NULL, FALSE);
uri_to_play = g_strdup_printf ("file:///home/video_to_play.mp4");
gst_play (uri_to_play, (GCallback) on_handoff);  // build the pipeline described above
gst_resume ();                                   // start playback (gst_play left it PAUSED)
pthread_create (&gst_loop_thread, NULL, GstLoop, (void *)&thread_id);
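gst_resume() and netplayer_gst_stop() (called at the top of gst_play()) are not shown in this post; they are just thin wrappers around the pipeline state. A minimal sketch, assuming the same pipeline global used in gst_play(), could be:

// hypothetical helpers around the global pipeline created in gst_play()
void gst_resume (void)
{
    if (pipeline)
        gst_element_set_state (pipeline, GST_STATE_PLAYING);
}

void netplayer_gst_stop (void)
{
    if (pipeline)
    {
        gst_element_set_state (pipeline, GST_STATE_NULL);
        gst_object_unref (pipeline);
        pipeline = NULL;
    }
}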


After that, g_pcFrameBuffer is being updated constantly, and we can use it as the texture for the cube's faces:


glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, g_pcFrameBuffer);
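Just as a sketch of how this fits into the render loop (the texture id and parameters here are my own assumptions, not the demo's actual code): create the texture once, then re-upload g_pcFrameBuffer every frame before drawing the cube. Since the video frame is usually not a power-of-two size, clamping and linear filtering without mipmaps are used.

// one-time texture setup (hypothetical names, for illustration only)
GLuint videoTexture;
glGenTextures (1, &videoTexture);
glBindTexture (GL_TEXTURE_2D, videoTexture);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// every frame, before drawing the cube: upload the latest decoded frame
glBindTexture (GL_TEXTURE_2D, videoTexture);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, video_w, video_h, 0,
              GL_RGB, GL_UNSIGNED_SHORT_5_6_5, g_pcFrameBuffer);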


In this application we are not using the GPU buffers directly like in the last post; we will get back to that in the future.

And finally, the result:



EOF !
