Posted on 15:55, February 1st, 2017 by Many Ayromlou
Extremely cool ASCII animations in your URL bar. Works in Firefox and Chrome, but not in Safari.
Check out the Volumeter (listen) and the shooter game (pewpew). Use cursor keys and space bar in pewpew.
I came across a DOS batch file that generates the necessary XML manifest for video files on BrightSign's Developer Resources and Utilities page. Since I'm serving my video content to our BrightSign infrastructure from a LAMP stack, I rewrote their .bat file as a shell script. This lets me host the script in a local directory (/usr/local/bin), run it from cron every 5 minutes against a folder full of mp4 files, and have the MRSS manifest written to that same folder. Then all that needs to be done is to point the BrightSign MRSS widget at the URL for the XML file (http://server-FQDN.com/VideoMRSS.xml). The crontab entry looks like:
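Something along these lines; the script name, the video folder path and the exact schedule are assumptions, so adjust them to match your own install:

```
# /etc/crontab -- regenerate the MRSS manifest every 5 minutes
# (script name and folder are placeholders, not the actual ones)
*/5 * * * * root /usr/local/bin/makemrss.sh /var/www/html/videos
```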
The script itself is pretty simple. I've tried to keep it an almost exact copy of the DOS batch file:
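The original script isn't reproduced here, but a minimal sketch of the idea looks like this. The folder, server URL, feed title and demo file are all illustrative assumptions, not BrightSign's exact manifest format, so compare against their sample before using it:

```shell
#!/bin/sh
# Sketch: build an MRSS manifest from a folder of mp4 files.
# VIDEODIR and BASEURL are placeholders; override via the environment.
VIDEODIR="${VIDEODIR:-/tmp/mrss-demo}"
BASEURL="${BASEURL:-http://server-FQDN.com}"
OUT="$VIDEODIR/VideoMRSS.xml"

mkdir -p "$VIDEODIR"
: > "$VIDEODIR/sample.mp4"        # demo file so the sketch runs standalone

{
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">'
  echo '<channel>'
  echo '<title>Video Feed</title>'
  for f in "$VIDEODIR"/*.mp4; do
    [ -e "$f" ] || continue        # skip the unexpanded glob if folder is empty
    name=$(basename "$f")
    size=$(wc -c < "$f" | tr -d ' ')
    echo '<item>'
    echo "<title>${name%.mp4}</title>"
    echo "<guid>$BASEURL/$name</guid>"
    echo "<media:content url=\"$BASEURL/$name\" fileSize=\"$size\" type=\"video/mp4\"/>"
    echo '</item>'
  done
  echo '</channel>'
  echo '</rss>'
} > "$OUT"
```

Each cron run simply rewrites VideoMRSS.xml in place, so the widget always sees a fresh manifest.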
Posted on 18:20, June 23rd, 2015 by Many Ayromlou
Fresh after recovery, I figured I'd write a small piece before I go home. If you're seeing ad pop-ups or pop-unders from sites that load something like rhpop-xxxxxx.js, you might have the SweetCaptcha plugin installed on your WordPress site. Let's hope you're the one who installed it, not a hacker :-). Either way, you should deactivate it and remove it from your system: the SweetCaptcha site has been compromised and the plugin has been pulled from the WordPress plugin directory. More general info here:
IRadio = Raspberry Pi + Adafruit PiTFT (with buttons) + MPD + NCMPCPP + Bluetooth BeatsPill Speaker + Custom Frame Buffer Pygame code
Posted on 10:18, January 21st, 2015 by Many Ayromlou
You’ll need the following hardware:
The screen (PiTFT) comes pre-assembled, so all you need to do is solder on the 4 buttons at the bottom. This literally takes 5 minutes (8 solder points if you're keeping track). Once that's done, plug the screen onto the Pi.
Now grab a copy of the custom Raspbian image off Adafruit's website (I grabbed the 9/18/2014 image from http://adafruit-download.s3.amazonaws.com/PiTFT28R_raspbian140909_2014_09_18.zip). There might be a newer one out by the time you read this. Note that this image is only for the resistive touch TFT, NOT the capacitive screen. Unzip the file (you'll get a .img file) and burn it onto an SD card (mine is 16GB) following the instructions at http://elinux.org/RPi_Easy_SD_Card_Setup.
Plug the USB WiFi and USB Bluetooth dongles into the Pi. Also plug in a wired Ethernet connection (hopefully you have DHCP on it).
Let's ssh to the Pi to set it up (the IP is shown on the PiTFT). I'm logging in as user pi (password raspberry... make sure you change it) and immediately switching to root.
At this point (as root) do the following tasks by running "raspi-config" and using the screenshots as reference:
Now that we have the basics configured, and before we install the WiFi config tool, it's a great time to take a break and do an "apt-get update ; apt-get upgrade" cycle. Next we want to make sure that both our USB dongles (WiFi, Bluetooth) are detected by running "lsusb".
We now install wicd-curses with "apt-get install wicd-curses". Then we run it; don't touch anything yet, since you need to press the right keys here. First, right off the bat, press P (for Preferences). Note that it's a CAPITAL P.
Once things are set up like the above picture, press the F10 key to save. You might lose the connection; just wait until ssh times out, re-ssh back in and sudo -i again. Now we need to get back into wicd-curses (if the connection dropped) and find the SSID for our WiFi. Once there, DO NOT press ENTER. Highlight the entry using the up/down cursor keys and press the RIGHT cursor key to open the prefs for that SSID. NOTE: the WPA 1/2 entry has been changed as well.
Press F10 to save. You might get disconnected (you'll see the WiFi LEDs flashing). If you got timed out, relogin and sudo -i to get root. Reboot NOW. Next we need to disconnect the wired connection; this forces the system to bring up WiFi. Once the machine is booted (it might take a bit longer, since we have to wait for wired DHCP to time out), you'll find the WiFi IP on the TFT.
SSH to that IP and get a root shell. We now need to configure Bluetooth. Follow along with the pics below. First we need to install a bunch of packages with "apt-get install bluetooth bluez bluez-utils bluez-alsa". Then remove the unnecessary services that got pulled in (scanner, printer, avahi) and disable their autostart.
Add the following Disable/Enable block (2 lines) to the [General] section of /etc/bluetooth/audio.conf file.
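The exact block isn't reproduced here, but in the bluez 4.x guides of that era it typically had this shape. Treat both lines as assumptions and match whatever your screenshots show for your setup:

```
# /etc/bluetooth/audio.conf -- inside the [General] section
# (illustrative; the service list depends on your bluez build)
Disable=Headset,Gateway
Enable=Source,Sink,Media,Socket
```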
Okay, we need to reboot now; once back in, continue as root. First we make sure our BT device is initialized.
Then we turn on the BT speaker (BeatsPill) and put it in discovery mode (hold the b button until the Bluetooth LED on the back starts flashing; you might have to push the b button once first). Then we go on to discover the device.
Note its address; we will need to copy and paste it into the next few commands. Put the following in /etc/asound.conf, taking care to replace the MAC address of the BT device with the one you copied in the last step.
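A sketch of what that file looks like with the bluez 4.x ALSA plugin; the MAC is a placeholder for the address you just discovered:

```
# /etc/asound.conf -- route ALSA's default device to the BT speaker
pcm.!default {
    type plug
    slave.pcm "bluetooth"
}
pcm.bluetooth {
    type bluetooth
    device "00:11:22:33:44:55"    # replace with your speaker's MAC
}
```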
Once the last command returns 1, the device is trusted and Linux will try to auto-connect to it whenever the BT speaker is on at boot. You'll also hear a beep when the bluetooth-agent command successfully connects. In the last step I'm just establishing an audio connection by restarting the bluetooth daemon (making it forget the connection) and using bluez-test-audio to connect back to the speaker. Again, you'll hear a beep when the Pi connects.
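The commands from those screenshots were roughly the following bluez 4.x tools. The MAC is a placeholder, and whether you get a PIN prompt depends on the speaker, so take this as a sketch of the sequence rather than a transcript:

```shell
hcitool dev                                    # confirm hci0 is up
hcitool scan                                   # find the speaker, note its MAC
bluetooth-agent 0000 00:11:22:33:44:55         # pair (PIN 0000 is a common default)
bluez-test-device trusted 00:11:22:33:44:55 yes
bluez-test-device trusted 00:11:22:33:44:55    # prints 1 once trusted
/etc/init.d/bluetooth restart                  # forget the current connection
bluez-test-audio connect 00:11:22:33:44:55     # you should hear the beep here
```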
Now we need to install mpd, mpc and the curses based ncmpcpp by using “apt-get install mpd mpc ncmpcpp”.
Once that’s done copy /etc/mpd.conf to /etc/mpd.conf.old and create /etc/mpd.conf with the following content:
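A trimmed sketch of what mine contained; the paths are Debian defaults and the output names are my own choices, so check them against your playlist location:

```
# /etc/mpd.conf -- trimmed sketch
music_directory     "/var/lib/mpd/music"
playlist_directory  "/var/lib/mpd/playlists"
state_file          "/var/lib/mpd/state"
user                "mpd"

audio_output {
    type    "alsa"
    name    "Headphone jack"
    device  "hw:0,0"
}
audio_output {
    type    "alsa"
    name    "Bluetooth speaker"
    device  "bluetooth"           # the pcm defined in /etc/asound.conf
}
audio_output {
    type    "fifo"
    name    "ncmpcpp visualizer"
    path    "/tmp/mpd.fifo"
    format  "44100:16:2"
}
```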
There are three audio_output sections: one for the internal headphone jack, one for Bluetooth audio, and a third for ncmpcpp's spectrum analyzer (although I don't use it in this project).
Next we need a playlist file in m3u format. You'll find a good one below (it contains di.fm, sky.fm, CBC and SomaFM AAC links). Copy it to /var/lib/mpd/playlist/something.m3u (remember the filename, since we'll use mpc to load it):
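The format itself is just one stream URL per line, optionally with #EXTINF titles. These SomaFM URLs are illustrative examples (streams move around), so grab current ones from the station sites:

```
#EXTM3U
#EXTINF:-1,SomaFM Groove Salad
http://ice1.somafm.com/groovesalad-128-aac
#EXTINF:-1,SomaFM Drone Zone
http://ice1.somafm.com/dronezone-128-aac
```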
The last file we need to set up is /etc/default/bluetooth; then we reboot (make sure your BT speaker is in Bluetooth mode... on the Pill you need to press the big b button so the Bluetooth LED on the back turns on).
If you've made it this far, the BeatsPill should have beeped during the Pi boot cycle, signifying a Bluetooth connection. Hopefully :-). If it didn't, take the next step with the mpc commands (to get mpd playing some streams). If it still doesn't work after that, something has gone wrong and you need to troubleshoot.
Now log in, get a root shell, and follow along for a quick audio test using the mpc command.
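The test boils down to loading the playlist and starting a stream. The playlist name below is the "something" from earlier (load it without the .m3u extension); the volume is just a sane example:

```shell
mpc clear            # start from an empty queue
mpc load something   # load the .m3u you dropped in earlier (no extension)
mpc play 1           # tune the first station; audio should hit the BT speaker
mpc status           # confirm mpd is actually playing
mpc volume 80
```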
Hopefully you got everything working with BT and you're hearing music now. If not, stop and do some googling. For the sake of completeness I've included the commands I usually use to troubleshoot. Basically: find the BeatsPill MAC address, try connecting to it (mine barfs in the picture below since it's already connected), then disconnect and reconnect. Kick mpd, since it can lose its mind if you take the audio interface away. Once mpd has restarted, get mpc to kick off the tunes by tuning into item 99 from the playlist (yeah, I like the 80's).
Next we need python-pip (to get the "pip" command). Use "apt-get install python-pip" to install it (don't worry if it pulls in python 2.6 bits; python 2.7 will still be the default).
Okay, this next bit is a bit hairy. Follow it right through until I finish editing /etc/modprobe.d/raspi-blacklist.conf, and only then reboot. If you reboot like the picture below shows, you'll end up with a white screen on the PiTFT. The Pi is still working, so worst case, put up with the white screen, ssh in, make the three changes to the three files in /etc and give it another reboot... and voilà, the PiTFT should be back and good to go. So let's update everything by doing the 3 commands in the picture below. REMEMBER: DO NOT REBOOT. KEEP GOING AND EDIT THE NEXT 3 FILES, THEN REBOOT.
Add ipv6, stmpe_device, gpio_backlight_device, gpio_keys, gpio_keys_device and btusb to /etc/modules so the kernel loads them. We need these later when we play with the buttons via the triggerhappy service. You don't strictly need ipv6, but heck, it might be useful later.
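So the end of /etc/modules picks up these lines (one module name per line):

```
# /etc/modules -- extra modules to load at boot
ipv6
stmpe_device
gpio_backlight_device
gpio_keys
gpio_keys_device
btusb
```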
Next we set the options for the modules we're loading. If you took the reboot too soon and your screen went white, this file is the problem. Note that I've commented out the first line and put in the second (I believe Adafruit had their TFT name baked into their kernel module, but after the kernel update above, the new module uses pitft instead). There are comments in there as well explaining the GPIO button assignment and its interface with triggerhappy (later).
This file just needs a single line commented out. You'll see later why we need this.
Okay... still with me? Good. Now we need to configure the triggerhappy service so it can respond to the four PiTFT buttons.
I’ve configured the buttons as (from left to right):
If you want to configure your own commands, you'll need the appropriate KEY_XXXX kernel strings; the key numbers are at https://github.com/torvalds/linux/blob/master/include/uapi/linux/input.h. The numbers go in as options in the /etc/modprobe.d/adafruit.conf file, and the corresponding KEY_XXXX entries end up in /etc/triggerhappy/triggers.d/mpc.conf. More info on the gpio_keys_device module is at https://github.com/notro/fbtft_tools/wiki/gpio_keys_device. A good reference for triggerhappy and other PiTFT stuff is https://github.com/notro/fbtft/wiki/FBTFT-shield-image, under the triggerhappy (thd) heading. Also check out the man page for thd.
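For reference, a triggerhappy trigger file is just "event name, value (1 = key down), command", one per line. The specific KEY_* names below are assumptions that depend on what you mapped in /etc/modprobe.d/adafruit.conf; the four commands match the button actions described in this post:

```
# /etc/triggerhappy/triggers.d/mpc.conf -- sketch
# <event name>     <value>  <command>
KEY_POWER          1        /sbin/shutdown -h now
KEY_MENU           1        /usr/bin/python /home/pi/pmb-pitft/pmb-pitft/ui.py
KEY_PREVIOUSSONG   1        /usr/bin/mpc prev
KEY_NEXTSONG       1        /usr/bin/mpc next
```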
We also need to modify the system startup script for triggerhappy so that it starts as user root (by default it starts as nobody). If we don't, we won't be able to launch our nice UI python file from the buttons later on. Make the change to the DAEMON_ARGS variable (I've commented out the original in the pic below): change "nobody" to "root".
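The edit in /etc/init.d/triggerhappy ends up looking roughly like this (the exact flag list may differ slightly between triggerhappy versions; the only change is the --user value):

```
# /etc/init.d/triggerhappy -- run the daemon as root instead of nobody
#DAEMON_ARGS="--daemon --triggers /etc/triggerhappy/triggers.d/ --socket /var/run/thd.socket --pidfile $PIDFILE --user nobody"
DAEMON_ARGS="--daemon --triggers /etc/triggerhappy/triggers.d/ --socket /var/run/thd.socket --pidfile $PIDFILE --user root"
```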
In the next step we'll reconfigure the init process to bypass login on console 1 (the PiTFT) and disable all the other consoles (we're not using them and they waste memory). Once login has been bypassed we can freely run ncmpcpp on the PiTFT during boot. For this we need to create two files in /root: first /root/.ncmpcpp/config, and then a shell script called ncmpcpp.sh (don't forget to chmod 700 it so it's executable), which we'll call from /etc/inittab later.
Note: MAKE SURE YOU "chmod 700 /root/ncmpcpp.sh", otherwise you could end up with an infinite boot loop, which is not fun.
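The wrapper script itself can be as small as this sketch (the loop and sleep are my assumptions about how to keep the console occupied; your copy may differ):

```shell
#!/bin/sh
# /root/ncmpcpp.sh -- respawned from /etc/inittab on tty1 (the PiTFT console).
# Loop so the console never falls back to a login prompt if ncmpcpp exits.
while true; do
    /usr/bin/ncmpcpp
    sleep 1      # avoid a tight respawn loop if ncmpcpp fails to start
done
```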
The contents of my /etc/inittab are here (be extremely careful when changing things in here).
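The relevant lines looked roughly like this; this is a sketch from memory, so double-check it against your own file before rebooting (a bad inittab line is exactly how you get the boot loop mentioned above):

```
# /etc/inittab -- the relevant bits only
# console 1: run the ncmpcpp wrapper instead of a login prompt
1:2345:respawn:/root/ncmpcpp.sh tty1 </dev/tty1 >/dev/tty1 2>&1
# consoles 2-6: commented out to save memory
#2:23:respawn:/sbin/getty 38400 tty2
#3:23:respawn:/sbin/getty 38400 tty3
```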
Now, once more, before we reboot and screw it all up, make sure you can run /root/ncmpcpp.sh from an ssh command line. You should see a clock. Press "q" to exit. Good... now reboot, and if you didn't screw up you should see a nice console screen with the clock on your PiTFT.
I'm playing something here. Give the WiFi a chance to settle before pressing the previous/next buttons (3rd and 4th from the left).
Now let's move on to getting the second button working (i.e. the nice pygame GUI). The original source came from https://github.com/ISO-B/pmb-pitft. I've made a bunch of changes, since I use the Pi for radio only and radio stations don't adhere to the Artist - Title standard. You can use the original code, but since the program isn't getting the right info, the last.fm pics and info come out wrong. The other change I've made is to show the artist picture from last.fm when the album information is missing, which is the case when you're playing internet radio. Almost all radio stations I've come across send Artist - Song Title, not Album Name.
You can download my version (https://dl.dropboxusercontent.com/u/3665206/pmb-pitft.tgz) as a tar file and open it up in /home/pi/ folder using “tar -zxvf ./pmb-pitft.tgz” (assuming the file is in /home/pi to start the tar extraction).
You'll also need the pylast library to fetch last.fm information, so let's install it with "apt-get install python-pylast". Once that's done, edit ui.py (in the /home/pi/pmb-pitft/pmb-pitft folder) and change API_KEY, API_SECRET, username and password to your account's credentials (API_KEY and API_SECRET are at http://www.last.fm/api/account once you log in).
Now let's make sure the extraction went okay. Assuming you've followed my instructions, if you issue "/usr/bin/python /home/pi/pmb-pitft/pmb-pitft/ui.py" from the /home/pi folder, you should see a nice GUI on the PiTFT like below.
Make sure this command works, since our 2nd button on pitft is hardwired to run this command when you press it.
Okay, assuming you're still with me: one more thing we need to do is change the console fonts to make the ncmpcpp "clock" screen look a bit better. This is the standard procedure explained elsewhere (Adafruit, for one). Run "dpkg-reconfigure console-setup" and follow the screens.
Once this process finishes, you’ll end up with a clock startup screen like this.
And again, from left to right the buttons do "shutdown -h now", pygame UI, Previous Song, Next Song. As I type this I'm getting bad sectors on the Pi (cheap SD card, I guess), so I'm off to back everything up. You should do the same RIGHT NOW :-).
At this point you can go off exploring (you did create a backup, right?). I would check out Adafruit's page on the resistive PiTFT (https://learn.adafruit.com/adafruit-pitft-28-inch-resistive-touchscreen-display-raspberry-pi?view=all) and start paying attention about halfway down, around the "sudo reboot and look at the console output" paragraph (just search for it on the page), where it starts talking about calibrating the screen for X and such. Frankly, I found the default calibration pretty good.
For a really good web interface that literally takes 2 minutes to set up, check out ympd (http://www.ympd.org/). It's so simple (especially if you grab the precompiled executable): no configuration, no rocket science... oh, and did I mention it looks great :-).
Posted on 15:54, September 22nd, 2014 by Many Ayromlou
Not sure why this is such a mystery, but it took the better part of a day to troubleshoot. The main issue with forum posts is that people have the right idea/intention, but the forum software mangles the actual command line/shortcode. Spacing really, really (did I say really?) matters. I'm assuming you're using the default player definition that comes with the plugin; if you need to change it, make the corresponding change to this code as well. Getting both RTMP and HLS working depends on defining both sources. In my case the source is my Wowza server and I have two URIs:
1) For HLS I use http://wowza.server.ip.address:1935/live/many/playlist.m3u8
2) For RTMP I use rtmp://wowza.server.ip.address:1935/live/many
Obviously, as you can see, my Wowza application is "live" and the stream instance name is "many". So for this to work transparently in both HTML5 (HLS) and Flash (RTMP) environments, you need the following code inserted into a WordPress post or page (make sure you do it in Text view, NOT Visual view):
NOTE: The above code intentionally starts with [player. Please replace that with jwplayer instead; I can't put the code in properly without the plugin installed on this site interpreting it as a shortcode.
Also, I cannot be more clear: SPACES DO MATTER HERE, SO PAY ATTENTION!!!
DEKTEC: DekTec introduced the DTA-2180, a low-profile PCIe H.264 encoder. The DTA-2180 is a low latency (150 to 600 ms) hardware encoder based on the Magnum chipset. It supports MPEG-2 and H.264 and up to 16 channels of audio; audio can be encoded as AC-3, AAC or MPEG-1 Layer 2. The DTA-2180 offers a 10-bit 4:2:2 option for contribution encoding. It has 3G-SDI and HDMI inputs and an ASI output. The compressed stream output (TS-encapsulated H.264 or MPEG-2) is also available over PCIe for real-time streaming, processing and recording.
NIMBUS: The WiMi6400T and WiMi6400R provide high quality Full HD encoding/decoding with a low latency of 40 ms each for encoding and decoding. They support a wide range of encoding rates, from 1 Mbps to 30 Mbps, for high quality video broadcasting. The WiMi6400T provides RTSP streaming server functionality and can also be used as a real-time MPEG-2 TS/UDP streaming server with linear PCM audio for IPTV networks. It supports one-to-many multicasting over an Ethernet LAN or IP network, so there is no restriction on the number of receivers.
VIOLIN MEMORY: Violin Memory’s 6000 Series flash Memory Arrays are all-silicon shared storage systems built from the ground up, harnessing the power of flash memory and delivering industry-leading performance and ultra-low data access latencies. A single 3U array delivers more than 1 million IOPS with consistent, spike-free latencies in microseconds. Violin Memory is uniquely positioned to deliver flash memory systems that can compete with performance disk from a cost for raw capacity perspective, even before taking into account the potential benefits of features like deduplication. This is possible because 6000 Series flash Memory Arrays are purpose built with flash components sourced through Violin Memory’s unique and strategic alliance with industry leader Toshiba. The core of the 6000 is the Flash Memory Fabric. The Flash Memory Fabric is a resilient, highly available deep mesh of thousands of flash dies that work in concert to continuously optimize performance, latency, and longevity. All of the active components of the Flash Memory Fabric are hot-swappable for enterprise grade reliability and serviceability. 6000 Series flash Memory Arrays connect natively to existing 8Gb/s Fibre Channel, 10GE iSCSI, and 40Gb/s Infiniband network infrastructures.
TOSHIBA: ExaEdge™ by Toshiba is a next-generation SSD-based edge streaming server with extra low power consumption. It lets you serve large numbers of concurrent high quality video streaming sessions with low host CPU and memory utilization. ExaEdge adopts Toshiba's NPEngine™, the world's first direct SSD-to-IP embedded hardware technology, offering direct storage access from SSD as an embedded hardware solution in a compact 2RU server. The resulting performance is capable of serving up to 64,000 simultaneous sessions with total host CPU usage under 12%. Modern video distribution over IP, like OTT streaming, leverages existing HTTP-based caching; unlike traditional IPTV networks, which adopt specialized network architectures, in adaptive bitrate scenarios HTTP chunks can be cached by a traditional cache server at the edge and redistributed with lower latency.
NHK: NHK was at NAB this week, quietly showing off footage shot with a Super Hi-Vision 8K camera affectionately known as the Cube. The Cube is surprisingly compact at 2 kg, since it records externally to one of the only 8K HEVC real-time encoders in the world; it's essentially a housing for the mammoth sensor and lens mount, along with the necessary connections. But even though it's a simple design, it delivers an amazing resolution of 7680 x 4320 pixels. 8K could rival IMAX and be excellent for big events beamed around the world, giving spectators who can't make an event the chance to experience it in a way no prior format could. And NHK is planning to broadcast the 2016 Summer Olympics in Rio in 8K.
4EVER: 4Ever showed MPEG-DASH demos at NAB 2014. The DASH demo featured adaptive bit-rate delivery: four different HEVC encodes of original 4K content at several bit rates, including 14.5 and 11.5 Mbps for 4K, 5.8 and 3.7 Mbps for a 1080 version, and a 720 version streaming at 2.9 or 1.8 Mbps. The monitor ran a Chrome browser with HTML5 support, which can only show a 4K/30 frame image. To show adaptive streaming, they randomly switched from one bit stream to another, showing the data on the monitor. The changes were seamless, though you do see a change in picture quality.
VISION 3 IMAGING: Vision III Imaging demonstrated 4K 60p parallax scanned imagery and its Real Shot™ parallax induction technology. Parallax scanning is a technique for capturing three-dimensional depth information over time using one camera and one lens. V3 imagery can be displayed on a standard display without 3D glasses or special screens. Real Shot is a parallax induction technique that also embeds three-dimensional parallax information into Internet or mobile digital advertising. Parallax scanning is accomplished using a digital parallax scanner (DPS). The DPS is a moving iris mechanism that is inserted into the optical path of a lens. When the iris is moved off the center of the lens, it records a different point of view at the plane of focus. The DPS iris scans in a circle around the center of the lens, making it possible to capture 360° of parallax information using a single lens.
RENEWED VISION: With its new Multiple Screen functionality, ProVideoPlayer 2 ($999) makes it easier than ever to create multi-screen presentations from a single computer with support for multiple graphics cards and easy mapping within each card and across multiple cards. Users can also add external graphics processors to each one of these graphics card outputs for even more screens, as well as add outputs that are not yet connected to a physical output, allowing shows to be pre-built off-site prior to the event. PVP 2 supports Multiple Layers, which afford the flexibility to create unique looks and allow the user to take full advantage of multiple screens. A layer is merely a video channel, so multiple layers are also great for a single screen environment where layering, textures, or PIPs are desired.
SILICON POWER: The Silicon Power Thunder T11 is not only the lightest but also the smallest Thunderbolt™ SSD on the market. With its extremely small, featherweight design, the Thunder T11 is half the size of ordinary storage devices and weighs only 65 g. Built on Thunderbolt I/O technology, it is three times the speed of a USB 3.0 HDD and delivers transfer rates of up to 380 MB/s read and 340 MB/s write.
360HEROS: A 360-degree shooting hexacopter using 3D-printed GoPro 3 mounts.
ERICSSON: Showing a 100 Mb/s (4x25 Mb/s) live UHDTV broadcast using DVB-S2 extensions to broadcast true 4Kp60 over the air.
LACIE: The LaCie 8big Rack is the company's first Thunderbolt 2 rackmount storage solution, featuring up to eight 6TB 7200RPM hard drives and delivering speeds of up to 1330 MB/s. The 8big Rack also features easy access to components and tool-free maintenance of the included power supply units, fans and disks, along with a three-fan cooling system that conducts heat away from vital components. It will be offered in 4-disk (12TB) or 8-disk (24TB and 48TB) configurations.
SKYPE: Skype has been an essential tool in the production of podcasts and newscasts for years, and today Microsoft has announced a professional-grade version of the app designed specifically for the media industry. It’s called Skype TX and is intended to be used in studio environments; you won’t be using this to record a podcast in your bedroom. Skype TX is described as an “easy-to-use hardware and software combination that allows Skype video calls from anywhere in the world to be seamlessly integrated into any production.” It plays nice with industry standards by outputting calls in full-frame HD-SDI formats.
LIVESTREAM: Livestream announced a pair of production switchers, the HD510 and HD1710. The HD510 is a portable version with an integrated touch display, yet still full featured with 5 SDI inputs. The rack-mounted HD1710 is at the other end of the spectrum: it features up to 17 inputs and can drive 4 displays. They also announced the Livestream Studio Control Surface, a modular control surface with 5 assignable tracks, a T-Bar, an audio mixer and a USB connection to Livestream Studio.
AJA: CION™ is the new 4K/UHD and 2K/HD production camera from AJA. Record directly to Apple ProRes 422 and 444 at up to 4K 60fps or output AJA Raw at up to 4K 120fps.
DIGITAL BOLEX: Digital Bolex's new monochrome 16mm camera, dubbed the D16M, has the same form factor as the original D16, but there's a significant change under the hood: the D16M sports a native black-and-white sensor for the highest quality monochromatic capture without the need to debayer, retaining a higher sensitivity to light and preserving the full dynamic range of the sensor.
Here are the technical specs:
BLACKMAGIC: The new Blackmagic 4K URSA camera is weird (in a good way), featuring a 4K Super 35mm global shutter sensor, a real camera form factor, a built-in 10.1″ 1920 x 1200 fold-out display, and two 5″ 800 x 480 displays. Not only that, it has both interchangeable lenses and sensors, meaning you'll be able to upgrade to a better sensor at home by removing a few screws when one becomes available. Here are the specs:
Blackmagic also seeks entry into the broadcast-camera market with its newly announced Studio Camera, available in Full HD and 4K (Ultra HD) models. Designed for live broadcast applications, the Blackmagic Studio Camera sports a unique design with a massive 10″ LCD screen, built-in 4 hour battery, and a set of features you’d expect to see in large studio cameras, such as built-in talkback and tally indicators. Intended to meet the needs of a variety of live broadcast applications, the Blackmagic Studio Camera provides the connections required to fit into those environments. Connections include SDI (3G on the HD version and 12G on the 4K version) and optical fiber video inputs/outputs, XLR audio connections, reference, LANC remote control, and a 4-pin XLR power input. The camera features an active Micro Four Thirds lens mount that is compatible with a wide range of lenses via third-party adapters, opening the door for the use of common DSLR lenses to PL-mount cinema lenses, and even B4 ENG lenses.
SOLOSHOT: The surprisingly affordable SOLOSHOT 2 ($399) will follow a tracker that someone can wear, or that you can slap onto something, so you don't have to do a thing. Put on the tracker, set up your camera with the SOLOSHOT 2, and catch a wave with perfect video. It features vertical tracking, automatic zoom, a range of up to 2,000 feet and 360-degree horizontal tracking, and the kit even includes a tripod to get you started.
BRUSHLESSGIMBALS: Gimbi™ is a lightweight, easy to carry, simple to use, power-and-go, 2 axis handheld brushless gimbal for the GoPro. With Gimbi™, you can shoot videos and photos as smooth as the pros.
JIGABOT: Jigabot’s AIMe is a pill-shaped tripod mount that automatically follows your subject—keeping it in frame—in case you’re shooting video by yourself. It uses infrared markers and swivels and tilts using complex algorithms powered by a quad-core ARM processor.
CEREVO: Cerevo's LiveWedge ($999) provides easy control via a smartphone/tablet app. The app's rotary control enables slow transitions, which are difficult with a physical T-Bar. LiveWedge supports PiP and chroma key as well as all the basic transitions such as wipe, fade and cut. It has an SD card slot, and users can record 1080/30p (H.264) Full HD video to it while switching; you can also use videos and images from the SD card as a video source. Streaming is built in: 720/30p HD live streaming and 1080p HD video switching are available in one device. Supported streaming platforms include Ustream, YouTube Live and your own servers.
PESA: PESA showed their brand new Xstream Live Streaming mobile solution, co-developed by Ryerson students. They also received the NewBay Media Best of Show Award at NAB.
COMREX: Comrex LiveShot™ delivers live video over a range of IP networks. LiveShot is used by TV stations and networks to deliver high quality, low latency (200 ms) video from anywhere Internet access is available, and is especially optimized to perform well on challenging IP networks like 3G, 4G and satellite links. For optimal video quality, LiveShot encodes with the H.264 HIGH profile. In addition to standard AAC audio coding, it utilizes HE-AAC and AAC-ELD, both reducing network bandwidth and lowering delay. LiveShot can encode and decode an audio/video stream with less than 200 ms of delay, delivering full-duplex video and stereo audio between the field-portable and studio rackmount systems. A full-duplex cue channel is also available between the portable and studio units. On the portable, the return audio/video channel is delivered via output connectors, and the cue channel is accessible via a wired headset or Bluetooth to a wireless headset.
PANASONIC: The Lumix GH4 camera body, with its 16MP CMOS Micro Four Thirds sensor, will cost $1700, while the optional YAGH pro audio/video interface unit is an extra $2,000. The GH4 can shoot 4K at 30/25/24fps at 100Mbps using ALL-Intra compression; at 1080p that rises way beyond broadcast standard to 200Mbps. There are two 4K formats available, too: standard 3840 x 2160 at 30/25/24p, or cinema widescreen 4096 x 2160 at 24p only. When writing to SD card, the camera captures 4K video with 8-bit colour and the data rate is limited to 100Mbps. Add the optional Panasonic DMW-YAGH accessory (which is about as big as the GH4 body) and its four SDI ports can be used in tandem to extract uncompressed 4K at 10-bit colour. Power input, independent volume adjustment and twin XLR sockets mean everything a broadcast pro needs is here, but only via the DMW-YAGH.
The HX-A500 shoots at a resolution of 3840x2160, so Ultra HD. Sub-4K resolutions include 1080 at up to 50p and 720 at up to 100p. Unsurprisingly, it shoots to an MPEG-4 AVC/H.264 codec in an .mp4 wrapper.
The camera has a perhaps slightly disappointing variable bit rate, half that of the GoPro Hero 3+. Here’s the breakdown:
The camera has a fixed-focal, fixed f/2.8 aperture lens. It has a few white balance presets, including Auto / Indoor1 / Indoor2 / Sunny / Cloudy / White set. The shutter is listed as variable, from 1/25 to 1/12000. The HX-A500 has a built-in image stabilizer, with an angle of view currently listed as only 160°.
JVC: JVC has now also entered the large sensor market, and this intriguing little camera covers Super 35mm on an MFT mount. In terms of specs, the JVC GY-LSX2 has some really intriguing figures to offer. Not only is it very small and looks very ergonomic to handle, it offers 4K with frame rates up to 30p as well as a slow-motion feature at 2K resolution that goes up to 240fps. Footage is recorded internally with an H.264-type codec. The GY-LSX2 is announced at a price point "under $6000" and should arrive at the end of 2014.
The bigger brother, called the GY-LSX1, will feature a higher frame rate (60p) at 4K resolution, offer a shoulder-mount form factor, and seems to come in at around twice the price of the smaller one.
That's it for now... This year's buzzwords: 4K, UHDTV, HEVC, H.265, OTT (Over The Top). See you all next year :-)
Posted on 17:17, March 10th, 2014 by Many Ayromlou
A little while ago our web developer asked me to look into Proxmox containers and how we could take advantage of them to set up a development environment for him. The idea was to use the power of Linux containers to let him develop fully functional/accessible sites in a private container. Here are the steps we will cover in this article:
To do all this you need to download the proxmox ISO file and burn it to a CD. Go through the installation of proxmox and set up the “host” with the single public IP address. This is simple enough so I’m not gonna cover it here. Once you have this set up you should be able to point your browser at the IP address (https://aaa.bbb.ccc.ddd:8006). NOTE: I will use aaa.bbb.ccc.ddd as the representation of the publicly available IP throughout.
Next we need to secure access to the host to only allow connections from a specific IP address space. In my case that’s the University network — aaa.bbb.0.0/16 — this step is optional. We need to make sure ufw is installed. We also need to make sure ufw is denying incoming connections by default and then allow access only from the University network:
Note that I’m assuming your ssh connection to the host is via the University network (aaa.bbb.0.0/16). Make adjustments to this if it’s not, otherwise you might lock yourself out. These basic rules will plug all the publicly accessible holes and only allow connections from our University network.
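The ufw side of this would look roughly like the following (a sketch: aaa.bbb.0.0/16 is a placeholder for your permitted network, and ports 22 and 8006 cover ssh and the proxmox web interface):

```shell
apt-get install ufw                # make sure ufw is present
ufw default deny incoming          # drop everything inbound by default
ufw default allow outgoing
ufw allow from aaa.bbb.0.0/16 to any port 22 proto tcp    # ssh
ufw allow from aaa.bbb.0.0/16 to any port 8006 proto tcp  # proxmox web UI
ufw enable
```

Add the ssh rule before running `ufw enable`, otherwise you risk cutting off your own session.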
Setting up users in proxmox is a bit weird. You have to add a regular Unix user to the proxmox host environment and then add the user to proxmox later and give it permissions and roles. Here I will use a user “myadmin” to create something for our web developer to use.
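The Unix side of the account creation can be sketched with something like this (one of several useradd/adduser invocations that would work):

```shell
# Create user "myadmin" with a home directory, a matching primary
# group, bash as its shell, and membership in the sudo group.
useradd -m -U -s /bin/bash -G sudo myadmin
passwd myadmin    # give it a password for ssh/web logins
```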
This will create an account “myadmin”, join it to the primary group “myadmin”, assign it /bin/bash as its shell and make it part of the group “sudo” — which will allow the user to use the sudo command in the future. Next, on the proxmox web interface, we need to create an admin group called “Admin”. In the proxmox interface we click on the Datacentre in the left pane, go to Groups and click the Create button. Call the group “Admin”. Now go to the Permissions tab in the right pane. We need to create an Administrator Group Permission to assign to our “Admin” group. Click Add Group Permission (right below the tabs in this pane) and fill it in like below:
In this window the path / means the entire Datacentre (including the host and the containers/VMs). You might want to adjust this. The Role “Administrator” is a predefined role that is pretty much the same as root. Now that our group “Admin” has the “Administrator” role for the entire Datacentre, we want to make the user “myadmin” — which is just a unix account right now — part of that group, effectively creating another “root” account for our web developer. So back in the Users tab we click Add and create our new user (really just adding the Unix user to proxmox):
Okay, so now test and make sure you can access the host via ssh using myadmin as the user; also make sure you can sudo to root on the host; and check the web interface to ensure the myadmin account can log in and see all the goodies in the data centre. Otherwise stop and fix.
At this point login/ssh to the host as root or myadmin (plus “sudo -i” to become root). We need to modify the networking config in /etc/network/interfaces to set up all the masquerading jazz. Make a backup of your interfaces file first and note the public IP address that is in there (I’m gonna use aaa.bbb.ccc.ddd as my public address here). Once you have a backup replace everything in the file with the following:
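Since the original listing isn’t reproduced here, the file below is a rough reconstruction of what it would look like (a sketch, assuming eth0 is the physical NIC, vmbr0 the bridge, and 10.10.10.0/24 the private network; substitute your real public address, netmask and gateway for the placeholders):

```
# /etc/network/interfaces on the proxmox host (sketch)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address aaa.bbb.ccc.ddd
        netmask 255.255.255.0
        gateway aaa.bbb.ccc.1

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
```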
So in the above I’m creating a separate private network (10.10.10.0/24) behind the publicly available IP address aaa.bbb.ccc.ddd and am using some iptables commands to set up masquerading. This is sorta like setting up a home router to share a publicly available IP address you have at home. Once this is in place reboot the host and make sure you can log back into https://aaa.bbb.ccc.ddd:8006/ and get the proxmox interface. If you’re good to go, as a next step spin up two Ubuntu containers (I won’t go into details on this…..lots of docs out there for it). Your OpenVZ Container confirmation screen should look something like this:
The only really important thing here is that you set up the networking under the Network tab as Bridged mode and select vmbr0 as your bridge. Once that’s done ssh back to your host (aaa.bbb.ccc.ddd). Assuming you have two containers 100 and 101, enter one of them by using the vzctl command:
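For example, to enter container 100 (vzctl is the standard OpenVZ container tool that ships with proxmox):

```shell
vzctl enter 100    # opens a root shell inside container 100
# ... do your work inside the container ...
exit               # leaves the container and returns to the host
```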
Once inside the container you need to set up the networking. Again the file here is /etc/network/interfaces (assuming your container is Ubuntu/Debian flavoured). Backup this file first and replace the content with the following:
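A minimal container config along these lines would be (a sketch: 10.10.10.2 is the first container’s address, 10.10.10.1 is the host/gateway, and 8.8.8.8 is Google’s public DNS server):

```
# /etc/network/interfaces inside the container (sketch)
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        dns-nameservers 8.8.8.8
```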
Note here that I’m using Google’s name server. You can use that or substitute your own “real” name servers. Once you reboot the container and enter it again via the host, you should be able to ping just about any real host (www.google.com, www.yahoo.com or whatever). This gives us a basic NAT running on the host, and you just need to increment the IP address (10.10.10.2 in the above case) in the setup of the second container. At this point you should be able to enter either container and ping something outside.
So the rest of this article describes how to set up a secure reverse proxy using apache on the proxmox host (aaa.bbb.ccc.ddd). This way you can just point arbitrary DNS names at aaa.bbb.ccc.ddd and choose (via apache config) which one of your containers will answer the call. You can even get fancy and have multiple hostnames proxied to the same container and do standard name-based virtual hosting inside the container. I will just show the one-to-one proxied connection here. Start by installing apache on the host (apt-get install apache2). First we need to activate the proxy module. If you don’t have time to finish this entire procedure DO NOT CONTINUE. Literally in the time it takes to install and configure the proxy, script kiddies will hit your site and use you as a proxy to attack other sites. DO THE PROXY INSTALL AND CONFIG/SECURING PROCEDURE IN ONE SHOT.
Assuming apache is installed go to http://aaa.bbb.ccc.ddd and ensure you’re getting the apache “hello” screen. Now you can enable the three modules needed by issuing the following:
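Given the reference to libxml2 further on, the three modules are most likely proxy, proxy_http and proxy_html (mod_proxy_html is the one that depends on libxml2). A sketch of enabling them:

```shell
a2enmod proxy proxy_http proxy_html   # enable the reverse-proxy modules
service apache2 restart               # restart apache to pick them up
```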
Once that’s done you need to make some changes to your proxmox host’s default apache config, which is in /etc/apache2/sites-available/default. For the sake of completeness I’ve included my entire file here. Compare it to yours and modify accordingly:
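The original listing isn’t reproduced here, so the fragment below is a reconstruction of what such a config would roughly contain (a sketch: hosta.domain.ca/hostb.domain.ca and the 10.10.10.x addresses follow the examples in the text; the libxml2 path varies by distribution):

```
# /etc/apache2/sites-available/default (sketch, proxy-related parts only)

# IMPORTANT: YOU NEED THIS -- mod_proxy_html needs libxml2 loaded
LoadFile /usr/lib/x86_64-linux-gnu/libxml2.so.2

# IMPORTANT: YOU NEED THIS -- reverse proxy only, never an open forward proxy
ProxyRequests Off
<Proxy *>
        Order deny,allow
        Deny from all
</Proxy>

# IMPORTANT: YOU NEED THIS -- reverse proxy for hosta
<VirtualHost *:80>
        ServerName hosta.domain.ca
        ProxyPreserveHost On
        ProxyPass / http://10.10.10.2/
        ProxyPassReverse / http://10.10.10.2/
</VirtualHost>

# IMPORTANT: YOU NEED THIS -- reverse proxy for hostb
<VirtualHost *:80>
        ServerName hostb.domain.ca
        ProxyPreserveHost On
        ProxyPass / http://10.10.10.3/
        ProxyPassReverse / http://10.10.10.3/
</VirtualHost>
```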
Pay particular attention to the parts that have the comment (# IMPORTANT: YOU NEED THIS)……Guess what…..YOU NEED THIS. The first one loads libxml2, which is needed. The second block of code makes sure you are in reverse proxy mode (not forward proxy) and makes sure the main apache instance can’t be used for proxying. The third and fourth blocks enable the reverse proxy for a particular virtual host name. Now we need to reload apache on our proxmox host and do some testing. Reload apache with (service apache2 reload) and for sanity’s sake change the index.html file in both containers (under /var/www/index.html) to reflect hosta and hostb. I’ve basically just added the words hosta and hostb to the html file. Register hosta.domain.ca and hostb.domain.ca as “A” records in your DNS and point them at the IP address of the proxmox host (aaa.bbb.ccc.ddd).
If everything is working properly you should be able to point your browser at http://hosta.domain.ca and get the index.html page specific to that container, and the same for hostb. At this point you should be more or less good to go. If you need more containers addressable from the internet, just keep adding this block of code to the proxmox host’s /etc/apache2/sites-available/default, changing the hostname and incrementing the private IP address:
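Each additional container would get a block like this (a sketch, using a hypothetical hostc at the next private address):

```
<VirtualHost *:80>
        ServerName hostc.domain.ca
        ProxyPreserveHost On
        ProxyPass / http://10.10.10.4/
        ProxyPassReverse / http://10.10.10.4/
</VirtualHost>
```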
Optionally you can now go back and add a couple more ufw rules to only allow web access from a particular IP address space (in my case the university network aaa.bbb.0.0/16).
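Those extra rules would look something like this (a sketch, with aaa.bbb.0.0/16 again standing in for the permitted network):

```shell
ufw allow from aaa.bbb.0.0/16 to any port 80 proto tcp   # http
ufw allow from aaa.bbb.0.0/16 to any port 443 proto tcp  # https
```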
Again, with this setup — since we’re preserving the request header and passing it through the proxy in both directions — you can have hostd, hoste and hostf all point to the same private IP address in the proxy and do name-based virtual hosting on the apache instance inside that particular container, just like a standard name-based virtual host setup. Hope this helps…..
Posted on 15:35, November 24th, 2013 by Many Ayromlou
Proving the Network is Not the Problem With iperf – Packet Life: “When an application fails to perform as expected, the network is often the first thing blamed. I suppose this is because end users typically view the network as the sole limiting factor with regard to throughput, unaware of the intricacies of application, database, and storage performance. For some reason, the burden of proof always seems to fall onto networkers to demonstrate that the network is not at fault before troubleshooting can begin elsewhere. This article demonstrates how to simulate user traffic between two given points on a network and measure the achievable throughput.”
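The basic iperf workflow the article describes boils down to something like this (a sketch; iperf must be installed on both hosts, and 10.0.0.5 is a hypothetical server address):

```shell
# On the "server" end of the path you want to measure:
iperf -s
# On the "client" end, run a TCP throughput test against it:
iperf -c 10.0.0.5
```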
Posted on 10:15, November 23rd, 2013 by Many Ayromlou
Manipulating the Clipboard from the Command Line: “Copy and Paste are absolute necessities for virtually all computer users, and if you find yourself working in the command line frequently, you’ll want to know how to manipulate the clipboard. The commands pbcopy and pbpaste do exactly what they sound like, copy and paste through the command line. They’re actually quite powerful and you’ll be sure to find them useful the next time you’re hanging out with your bash prompt.”
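For example (macOS only: pbcopy reads stdin into the clipboard, pbpaste writes the clipboard to stdout):

```shell
# Put the contents of a file on the clipboard...
cat ~/.ssh/id_rsa.pub | pbcopy
# ...and feed the clipboard into another pipeline:
pbpaste | wc -l
```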
Posted on 16:53, November 21st, 2013 by Many Ayromlou
For those that are not familiar, Screenly is a digital signage system for the Pi. There is an open source edition of it (OSE) that you can just download and install on your own SD card. I’ve been messing around with it for the past few days and it’s surprisingly simple and powerful. Below are some notes on how to fix various annoyances: