Streaming Video With RaspberryPi

From Tmplab
Latest revision as of 00:18, 6 March 2015

General

Caution: this is a work in progress; things are still being tested. The objective is to end up with one or more working solutions for everyone.

Video streaming is a hard problem.

The RaspberryPi camera offers an interesting answer to it. It is a very well integrated module of the Pi with one huge advantage: h264 encoding is performed in hardware, as the camera connects over the Camera Serial Interface (CSI).

So in theory, solving the video problem with the Pi is easy, but there are many subtle problems in practice.


Problems

Audio As we use video webstreaming mostly for broadcasting conferences, good audio quality is necessary.

Slides It would be interesting to include the conference slides while filming.

File It is important to have a file at the end of the filming.

Web It is important to have a large viewer base, therefore a well supported format.


Raspicam basics

http://elinux.org/Rpi_Camera_Module

raspivid is the basic command line used to capture video in h264.

raspivid -t 3 -fps 25 -b 1000000 -w 1920 -h 1080 -o /tmp/video.h264

A very simple tutorial : http://www.raspberrypi-spy.co.uk/2013/05/capturing-hd-video-with-the-pi-camera-module/

Note: when using 1920x1080, the raspicam captures those pixels from the center of the sensor. At smaller resolutions, it uses the full 5-megapixel sensor area and scales down to the requested size. As a result, a 1280x720 video looks "unzoomed" compared to a 1920x1080 one, and the latter also has more grain in the picture. tl;dr: use 1280x720 maximum ;)

Solution

Solution 1 : OGG/VORBIS + Icecast

Basic idea

  1.  Use the PI to capture video as h264, merge audio from usb and use ffmpeg to produce MPEGTS "chunks"
  2. Rsync the chunks to a laptop or a server (note : the audio mix should be integrated here to ensure a good audio/video synchronization)
  3. Assemble the chunks and pipe them in ffmpeg
  4. Ask ffmpeg to convert this into ogg
  5. Use oggfwd to push the ogg to your icecast server
  6. Serve m3u from the server

CON ogg does not work for everyone. It is supposed to be HTML5 compatible but icecast doesn't offer that by default.

PRO Icecast is simple, open, and handles authentication. Rsync over SSH is crypto friendly. The file is saved on the server.

How to stream in OGG to Icecast

A. Compile FFMPEG on pi & server (see below)

B. Start capture in a screen

It is advised to run this one-liner in a screen session on the RaspberryPi

   [ -d /tmp/capture ] || mkdir /tmp/capture; rm -f /tmp/capture/* && cd /tmp/capture/ && \
   raspivid -ih -t 0 -w 1280 -h 720 -b 1000000 -pf baseline -o - | /usr/local/bin/ffmpeg -f alsa -itsoffset 6.5 -ac 1 -i hw:1 \
   -i - -acodec aac -strict -2 -vcodec copy -f segment -segment_list out.list -segment_list_flags +live -segment_list_size 5 -segment_time 4 -segment_time_delta 3 %10d.ts

What's happening here

  1. [ -d /tmp/capture ] || mkdir /tmp/capture; rm -f /tmp/capture/* && cd /tmp/capture/ We create a /tmp/capture folder and make sure it's empty before starting capture in it
  2. raspivid Use raspivid to capture with the following parameters:
    1.  -ih (inline headers) DONT CHANGE Necessary for technical reasons, as otherwise the "chunking" doesn't work
    2.  -t 0 (timeout) DONT CHANGE Necessary for technical reasons, as otherwise capture stops after 5s
    3.  -w 1280 -h 720 (width and height) Tweak according to your needs
    4.  -b 1000000 (bitrate) Tweak according to your needs (only integer values in bits per second are accepted; here 1000000 ≈ 1 Mbit/s)
    5.  -pf baseline (h264 profile) Tweak according to your needs (only baseline, main, or high accepted)
    6.  -o - (output) DONT CHANGE Necessary in order to send the stream to standard output
  3. We pipe the content into ffmpeg with the following parameters:
    1. ALSA Input
      1. -f alsa (format) We use alsa for usb audio capture
      2. -itsoffset 6.5 (time offset) This one is a trick: we noticed our RPi B+ had a 6.5 second delay before the audio started, so this is used to resync audio. Tweak.
      3. -ac 1 (number of audio channels) We used a mono input, so 1 was the right choice. Tweak
      4. -i hw:1 (input) Tweak, as your audio card address may vary. Find yours with arecord -l
      5. -acodec aac (audio codec) AAC works well for TS live.
      6. -strict -2 Argument mandatory for the AAC encoder
    2. Video Input
      1. -i - (input) DONT CHANGE Use the standard input
      2. -vcodec copy (video codec) DONT CHANGE Copy the video stream from the RPi as-is. Not enough CPU to do anything else.
      3. -f segment (output format) DONT CHANGE Use a "chunked" output
      4. -segment_list out.list (segment list) Defines a file listing the produced file names
      5. -segment_list_flags +live (segment list flags) Defines the way the segment list caches file names.
      6. -segment_list_size 5 (segment list size) Keep only the most recent entries in out.list
      7. -segment_time 4 (segment time) Defines the chunk base duration in seconds. Tweak.
      8. -segment_time_delta 3 (segment time delta) Defines a window, in seconds, to modulate chunk duration so the mandatory inline headers are included. Tweak.
      9. %10d.ts The format for chunk file names. %10d will start at 0000000000.ts, and ffmpeg understands we want MPEGTS format for the chunks
  4. ffmpeg saves the files 0000000000.ts, 0000000001.ts, etc. and out.list in /tmp/capture
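The %10d pattern matters for ordering: zero-padded ten-digit counters sort the same way alphabetically and numerically, so out.list and shell globs always see the chunks in capture order. A quick sketch of the naming using plain printf (no ffmpeg needed):

```shell
# Emulate ffmpeg's %10d chunk naming: zero-padded 10-digit counters
# keep lexical sort identical to numeric sort.
for i in 0 1 2; do
  printf '%010d.ts\n' "$i"
done
```

This prints 0000000000.ts, 0000000001.ts and 0000000002.ts, matching the names ffmpeg produces.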

C. Use rsync to infinitely synchronise chunks on server

Some important points to mention here

  • The RaspberryPi MUST have access to a <server> using an SSH KEY for a <user>. Password access won't work for infinite rsync.
  • This <server> CAN be your laptop. If so it MUST be on the same LAN as the RaspberryPi
  • This <server> CAN be a datacenter machine. If so it MUST be reachable over the Internet from the RaspberryPi.
  • This <server> MUST have FFMPEG installed (see point D below)
  • It is advised to run this one-liner in a screen session on the RaspberryPi
    ssh <user>@<server> "[ -d /tmp/capture ] || mkdir /tmp/capture" && \
   while true; do rsync -a --files-from=/tmp/capture/out.list /tmp/capture <user>@<server>:/tmp/capture; sleep 1; done


What's happening here

  1. ssh Use SSH ...
    1.  <user>@<server>  ... to connect to server <server> as user <user>
    2.  "[ -d /tmp/capture ] || mkdir /tmp/capture" ... and create the folder /tmp/capture if it does not exist
  2. while true; do Run an infinite loop
    1. rsync Start rsync file synchronisation
      1. -a (archive mode) Set the right parameters for transfer
      2. --files-from=/tmp/capture/out.list Use out.list as the list of files to transfer, which avoids scanning the whole folder
      3. /tmp/capture (source) Transfer local folder content...
      4. <user>@<server>:/tmp/capture; (destination) To the server <server>
    2. sleep 1; Sleep one second
  3. done Loop end
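The reason --files-from is cheap can be sketched without a network: only the files named in the list are touched. The sketch below uses temporary directories and cp as a stand-in for rsync-over-ssh; the paths and file names are hypothetical:

```shell
# Simulate the list-driven transfer: only files named in out.list
# are copied, so older chunks are never re-scanned.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/0000000000.ts" "$src/0000000001.ts" "$src/0000000002.ts"
printf '0000000001.ts\n0000000002.ts\n' > "$src/out.list"
while read -r f; do
  cp "$src/$f" "$dst/"   # rsync --files-from does this, but incrementally
done < "$src/out.list"
ls "$dst"
```

Only the two chunks named in out.list end up in the destination; the first one is never looked at.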

D. Broadcast from server to icecast

  •  You MUST install some script on <server> to assemble / concatenate the MPEGTS chunks for you.
   This PHP streamer is made for that: https://raw.githubusercontent.com/albancrommer/raspistream/master/stream.php
  •  You MUST install ffmpeg on <server> with ogg support (see below)
  • You MUST install the oggfwd command line tool with aptitude install oggfwd
  • You MUST have access to an icecast server. If you use a datacenter server, everything can run locally
    php /usr/local/bin/stream.php | ffmpeg -i - -r 12 -s 640x360 -vb 1000k -f ogg - | oggfwd -p -n "My RaspberryPi Stream" <stream.server.com> 8000 <mySecretIceCastStreamingPassword> /rpi01

What's happening here

  1. php /usr/local/bin/stream.php Start an infinite stream of assembled chunks received via rsync
  2. | ffmpeg Pipe into FFMPEG
    1. -i - (input) DON'T CHANGE Use Standard In as input
    2. -r 12 frame rate in images per second (recommended: low values for live streaming)
    3. -s 640x360 width and height of the video. Keep the same aspect ratio; 640x360 is good for low-bandwidth live streaming
    4. -vb 1000k video bitrate. Use a lower value such as 400k for roughly 512 kbit/s streaming
    5. -f ogg (format) DON'T CHANGE Use ogg as output format
    6. - (output) DON'T CHANGE Output to Standard Out
  3. | oggfwd Pipe into oggfwd
    1. -p (public) Makes the stream public. Tweak
    2. -n "My RaspberryPi Stream" (name) Your stream name. Adapt
    3. <stream.server.com> (address) Your icecast server name. Adapt
    4. 8000 (port) 8000 is default for icecast. Adapt
    5. <mySecretIceCastStreamingPassword> (password) The icecast input password Adapt
    6. /rpi01 (mountpoint) The icecast "mountpoint", i.e. the path for your stream

E. Get the m3u from icecast

With the default parameters provided the stream would be accessed on

   http://<stream.server.com>:8000/rpi01.m3u
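The playlist URL is simply the icecast host, port and mountpoint from step D joined together; a tiny sketch with hypothetical placeholder values:

```shell
# Build the m3u URL from the icecast parameters used in step D.
host=stream.server.com   # hypothetical icecast server name
port=8000                # icecast default port
mount=rpi01              # the mountpoint passed to oggfwd
echo "http://$host:$port/$mount.m3u"
```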

Sources

http://sirlagz.net/2013/01/07/how-to-stream-a-webcam-from-the-raspberry-pi-part-3/

How to get full video from the small chunks

After the streaming you should have chunks both on the RaspberryPi and the server, and could perform the conversion on any of them.

Except that the RaspberryPi is VERY slow, and depending on your budget / stability needs you might not have kept all the chunks on it.

In other words, make the conversion on the server, be it your laptop or a datacenter server.

A. Clean the last file (optional)

As our last chunk / fragment might be invalid, it's safer to remove it using:

   ls /tmp/capture/*ts|tail -n 1|xargs rm 


What's happening here

This command retrieves a sorted list of all chunks in the capture folder, extracts the last one and deletes it.
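The same cleanup can be rehearsed safely on dummy files in a temporary directory before pointing it at real captures:

```shell
# Rehearse the last-chunk cleanup on dummy files: ls sorts the names,
# tail picks the final (possibly truncated) chunk, xargs rm deletes it.
d=$(mktemp -d)
touch "$d/0000000000.ts" "$d/0000000001.ts" "$d/0000000002.ts"
ls "$d"/*ts | tail -n 1 | xargs rm
ls "$d"
```

Only the last chunk is removed; the earlier ones survive.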


B. Convert to single file (mp4, webm)

It is assumed you have FFMPEG installed on the machine.

It is assumed you want to make minimal changes to your original video input (size, bitrate, etc.). Only essential options are provided, but you can add more according to your needs; double-pass conversion is not included either.

It is recommended to use a script to merge the files, as the ffmpeg syntax for that can be a bit of a mess, with few options if you only want to pick a start or end file.

   This PHP script is made for that : https://raw.githubusercontent.com/albancrommer/raspistream/master/concat.php


Converting to MP4

This operation can be fast as the MPEGTS chunks are ready for MP4

   php concat.php <start> <end> | ffmpeg -i - -movflags +faststart -threads 0 -profile:v high -preset slow <myfile>.mp4


What's happening here


  1. php concat.php Start concatenation
    1. <start> (optional) an integer designating the first file to include
    2. <end> (optional) an integer designating the last file to include
  2. | ffmpeg Pipe into FFMPEG with the following parameters
    1. -i - (input) DON'T CHANGE use stdin as input
    2. -movflags +faststart DON'T CHANGE Make the file ready for web viewing
    3. -threads 0 Use all CPU cores for the conversion. Tweak.
    4. -profile:v high Set the output quality. Tweak.
    5. -preset slow Set the encoding speed. Tweak.
    6. <myfile>.mp4 Your output file name. Adapt.
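concat.php works because MPEGTS segments can be joined by plain byte concatenation; the sketch below shows the ordering with two dummy segments in a temporary directory (real chunks would be piped into ffmpeg the same way):

```shell
# MPEG-TS segments concatenate byte-wise; the zero-padded names
# guarantee the glob expands in capture order.
d=$(mktemp -d)
printf 'SEG0' > "$d/0000000000.ts"
printf 'SEG1' > "$d/0000000001.ts"
cat "$d"/*.ts   # equivalent to concat.php's output stream
```

With real chunks, `cat /tmp/capture/*.ts | ffmpeg -i - …` approximates the same pipe, minus concat.php's start/end selection.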


Converting to WEBM

This operation will be slower, as the audio and video tracks need to be re-encoded with new codecs

   php concat.php <start> <end> | ffmpeg -i - -codec:a libvorbis -codec:v libvpx -threads 0 -quality good -cpu-used 0 -qmin 10 -qmax 42 <myfile>.webm


What's happening here


  1. php concat.php Start concatenation
    1. <start> (optional) an integer designating the first file to include
    2. <end> (optional) an integer designating the last file to include
  2. | ffmpeg Pipe into FFMPEG with the following parameters
    1. -i - (input) DON'T CHANGE use stdin as input
    2. -codec:a libvorbis (codec) DON'T CHANGE Define the audio codec
    3. -codec:v libvpx (codec) DON'T CHANGE Define the video codec
    4. -threads 0 Use all CPU cores for the conversion. Tweak.
    5. -quality good Set the encoding quality/speed trade-off. Tweak
    6. -cpu-used 0 Set the encoding speed. Tweak
    7. -qmin 10 -qmax 42 Set the encoding quality range. Tweak
    8. <myfile>.webm Your output file name. Adapt

Sources

https://www.virag.si/2012/01/web-video-encoding-tutorial-with-ffmpeg-0-9/

https://www.virag.si/2012/01/webm-web-video-encoding-tutorial-with-ffmpeg-0-9/

Solution 2 : FLVSTR + PHP Streamer

Basic idea The Octopuce company has a solution to convert live MP4 to F4V. With a USB audio card, we could mux the MP4 and AAC audio and have a standalone solution not needing any laptop.

CON authentication is not included as of now (only IP address filtering), F4V means Flash, requires a USB disk for local backup. Also, the live stream needs to have the same bitrate as the recorded one.

PRO the pi can be autonomous (no need for a laptop to encode the live stream)

First, authentication. This problem is addressed by solving encryption as well: we use an SSL socket to communicate with the server. (We could use rsync server mode too.)

Solution 3 : RTSP

Basic idea Use an RTSP stream with VLC and the V4L driver

CON Non-commercial RTSP servers are not the norm; requires VLC or a Flash player; quality with v4l is low

PRO Easy to work out

Sources

http://www.ics.com/blog/raspberry-pi-camera-module#.VJFhbyvF-b8

http://raspberrypi.stackexchange.com/questions/23182/how-to-stream-video-from-raspberry-pi-camera-and-watch-it-live

http://ffmpeg.gusari.org/viewtopic.php?f=16&t=1130

http://blog.tkjelectronics.dk/2013/06/how-to-stream-video-and-audio-from-a-raspberry-pi-with-no-latency/

Solution 4 : HLS + RSYNC

Basic idea Use HLS segmentation and rsync

CON Not all web players can do HLS

PRO Almost out of the box, robust

Howto

1. Compile fresh ffmpeg on the pi


2. Run a capture : raspivid -ih -pf baseline -t 0 -b 1000000 -w 1280 -h 720 -v -o - | ffmpeg -i - -f alsa -ac 1 -itsoffset 6.5 -i hw:1 -acodec aac -strict -2 -vcodec copy out.m3u8

3. Run a cron rsync to server (todo)

4. Connect a client (todo)


Sources

http://www.ffmpeg.org/ffmpeg-formats.html#hls

FFMPEG compilation

This installation is debian based. Some packages are included by default :

  • ffmpeg : Provides a large number of the dependencies required at compilation time
  • yasm : modular assembler (good for compilation)
  • pkg-config : info about installed libraries (good for compilation)
  • screen : helpful for running compilation in background

For Raspberry

For the Raspberry, we only need support for h264, AAC and ALSA

sudo -s
aptitude install screen yasm libx264-dev libasound2-dev libfdk-aac-dev ffmpeg
cd /usr/src 
git clone --depth 1 git://source.ffmpeg.org/ffmpeg.git 
cd ffmpeg
./configure --enable-nonfree --enable-gpl --enable-libx264 --enable-libfdk-aac 
make
make install

Compiling FFMPEG for laptop or server

Your default debian might come with sufficient support, but if you want total control, compiling is a good idea.

Remove packages and ffmpeg support if you don't need everything.

Ex: to produce ogg format, you only need

  • aptitude packages libtheora-dev and libvorbis-dev
  • configure options --enable-libtheora --enable-libvorbis
 
sudo -s
aptitude update 
aptitude install screen pkg-config yasm ffmpeg libass-dev libavcodec-extra libfdk-aac-dev libmp3lame-dev libopus-dev libtheora-dev libx11-dev libvorbis-dev libvpx-dev libx264-dev
cd /usr/src 
git clone --depth 1 git://source.ffmpeg.org/ffmpeg.git 
cd ffmpeg
./configure --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfreetype --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libvpx --enable-libx264 --enable-nonfree --enable-x11grab 
make
make install

References

Here are a number of unsorted links

  • http://techzany.com/2013/09/live-streaming-video-using-avconv-and-a-raspberry-pi/
  • http://blog.cloudfrancois.fr/category/streaming-video.html

FFMPEG

  • http://ffmpeg.org/ffmpeg-all.html#segment_002c-stream_005fsegment_002c-ssegment
  • https://trac.ffmpeg.org/wiki/StreamingGuide

Node

  • http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets
  • https://github.com/phoboslab/jsmpeg
  • https://github.com/fluent-ffmpeg/node-fluent-ffmpeg

Raspberry PI

  • Raspbian, debian on Raspberry pi http://www.raspbian.org/
  • Choose your SD card for your PI : http://elinux.org/RPi_SD_cards