Friday, September 16, 2016

Implementing Software-Defined Radio and Infrared Time-Lapse Imaging with TensorFlow on a Custom Linux Distribution for the Raspberry Pi 3

GNURadio Companion QT GUI Frequency Sink - multiple FIR filter taps
sample running on the Raspberry Pi 3 custom Linux distribution
The Raspberry Pi 3 is powered by the ARM® Cortex®-A53 processor.  This 1.2GHz 64-bit quad-core processor fully supports the ARMv8-A architecture.
For this project, a custom Linux distribution was created for the Raspberry Pi 3.

The custom Linux distribution includes support for GNURadio, several FPGA and ARM Powered® SDR devices, D-STAR (hotspot, repeater, and dongle support), hsuart, libusb, hardware real-time clock support, the Sony 8 megapixel NoIR image sensor, HDMI and 3.5mm audio, USB microphone input, the X Window System with Xfce, lighttpd and PHP, Bluetooth, WiFi, SSH, TCPDump, Docker, Docker registry, MySQL, Perl, Python, Qt, GTK, IPTables, x11vnc, SELinux, and full native-toolchain development support.

The Sony 8 megapixel image sensor with the infrared filter removed can be connected to the Raspberry Pi 3's MIPI camera serial interface.  Images can then be captured continuously over long, contiguous periods of time and assembled into time-lapse video.  With support for TensorFlow and OpenCV, object recognition can be performed on each captured image.

D-STAR hotspot with time-lapse infrared imaging.

For the initial run, an infrared time-lapse video was created from an image capture run of one 3280x2464 infrared JPEG image captured every 15 seconds for three hours.  Forty 5mm, 940nm LEDs, driven at 500mA from a 12V DC supply, provided the infrared illumination.
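A capture loop along these lines can be written in a few lines of C.  The following is a minimal sketch, assuming the raspistill utility is present on the image and the camera is enabled; the output path and filename pattern are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
   /* one frame every 15 seconds for three hours */
   const int interval_secs = 15;
   const int nframes = (3 * 60 * 60) / interval_secs;
   char cmd[256];
   int idx;

   for (idx = 0; idx < nframes; idx++) {
      /* capture one still to a numbered JPEG (path is illustrative) */
      snprintf(cmd, sizeof(cmd),
               "raspistill -n -o /data/timelapse/img%04d.jpg", idx);
      if (system(cmd) != 0) {
         fprintf(stderr, "capture %d failed\n", idx);
      }
      sleep(interval_secs);
   }

   return 0;
}

The numbered JPEG frames can then be assembled into a time-lapse video with a tool such as ffmpeg.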

TensorFlow was running in the background (capturing frames via the V4L2 kernel module) and providing continuous object recognition and scoring for each image using a sample model.  OpenCV was also installed in the root file system.

A time-lapse infrared video of my living room was captured using the above setup and the custom Linux distribution.  Below this image are images of TensorFlow running in a terminal in the background on the Raspberry Pi 3 and recognizing/scoring objects in my living room.

The Yocto distribution configuration file, image configuration file, custom recipes, and build configuration files are available on github at

TensorFlow running on the Raspberry Pi 3, continuously capturing frames from the image sensor and scoring objects
GNURadio Companion running on xfce on the Raspberry Pi 3

Friday, September 2, 2016

HamPi - A Custom Linux distribution for the Raspberry Pi 3 and D-Star

The HamPi Linux distribution for the Raspberry Pi 3 is designed for amateur radio operators to communicate on the D-Star network.  This distribution differs from other HAM radio distributions for the Raspberry Pi in that all software components were compiled from their actual sources.  In addition, the DVAPTool and DVAPNode software components were not used; instead, the open source dstarrepeater software was used to communicate with the attached DVAP.  The HamPi Linux distribution is built 100% from open source software, and all components, including the HAM radio software stack that resides in userland, were compiled from source.

The HamPi Linux distribution also has support for the Raspberry Pi 3 NoIR video camera with the Sony image sensor.  Therefore, time-lapse videos can be created on the HamPi with an attached Sony image sensor.

The HamPi distribution and image recipes are available on github.  I have not included the HAM radio software source recipes, but I may at a future time.  ircddbgateway, dummyrepeater, dstarrepeater, and ambeserver were all compiled from source code.  dummyrepeater and ircddbgateway can be used without ambeserver, as the latest dummyrepeater sources include support for both GPIO-based devices and serial/UART devices attached over USB.  The functionality from DVAPNode was merged into dstarrepeater, which is in turn part of the OpenDV sources; dstarrepeater contains native support for serial/UART devices attached via USB.  The sources for these components are available on

The custom Linux Distribution is called the HamPi Distribution and is a Poky variant.
The custom Linux Image is called the HamPi Image.
The Machine type is raspberrypi3 for the Raspberry Pi 3.  The distribution and image recipes are available on github at

The distribution and image contain the following.
  • U-Boot 2016.03
  • GNU/Linux Kernel 4.1.21
  • Device Tree Overlay Device Customization
  • ext3
  • WiFi
  • Bluetooth
  • HDMI, 3.5mm, and USB audio output
  • USB audio input
  • I2C
  • Real-Time Clock support
  • Raspberry Pi 3 Video camera support
  • TCPDump
  • systemd
  • udev
  • devtmpfs
  • SELinux + mls ref policy
  • Docker
  • Containerd
  • IPTables
  • Native GCC compilers, make, and autotools
  • OpenSSH client and server
  • XFCE
  • Python
  • WiringPi
  • libusb
  • Perl
  • MySQL
  • OpenFlow
  • libvirt
  • GTK+3
  • Git
  • ffmpeg
  • gstreamer
  • ALSA libraries and utilities
  • Raspicam
  • Development versions of all packages.
  • License files for all software.
My call sign is KF5SVQ.

Tuesday, August 16, 2016

Profiling Multithreaded / Multiprocess Applications on the DE0-Nano-SoC with ARM® DS-5 Streamline

The ARM® DS-5 Streamline Performance Analyzer within ARM® DS-5 Development Studio is an optimal tool for profiling and analyzing the performance of multithreaded / multiprocess applications. Without modifying the kernel on the Terasic DE0-Nano-SoC board, the gator daemon can be compiled using the Linaro 4.8 GCC ARM Hard Float toolchain and then uploaded to the DE0-Nano-SoC board running the stock Terasic Yocto build off of the uSD card.

The ARM® DS-5 Streamline Performance Analyzer is a very powerful tool for looking at CPU clock cycles, instruction execution broken down between load and store operations, memory usage, register usage, disk I/O usage (read and write), per-process and per-thread function call paths broken down by system utilization percentage, per-process and per-thread stack and heap usage, and many other useful metrics.

To capture some level of meaningful information from the DS-5 Streamline tool, the process_creation project has been modified to insert 1000 packets into the packet processing simulation buffer, and the child processes have been modified to sleep and then wake 1000 times in order to simulate process activity.

void *insertpackets(void *arg) {
   struct pktbuf *pkbuf;
   struct packet *pkt;
   int idx;

   if(arg != NULL) {
      pkbuf = (struct pktbuf *)arg;

      /* seed random number generator */
      srand((unsigned int)time(NULL));

      /* insert 1000 packets into the packet buffer */
      for(idx = 0; idx < 1000; idx++) {

         pkt = (struct packet *)malloc(sizeof(struct packet));

         if(pkt != NULL) {

            /* set the packet processing simulation multiplier to 3 */

            /* insert packet in the packet buffer */
            if(pkt_queue(pkbuf,pkt) != 0) {
               /* queueing failed; release the packet and stop inserting */
               free(pkt);
               break;
            }
         }
      }
   }

   return NULL;
}

int fcnb(time_t secs, long nsecs) {
   struct timespec rqtp;
   struct timespec rmtp;
   int ret;
   int idx;

   rqtp.tv_sec = secs;
   rqtp.tv_nsec = nsecs;

   /* sleep and wake 1000 times to simulate process activity */
   for(idx = 0; idx < 1000; idx++) {

      ret = nanosleep(&rqtp, &rmtp);

      if(ret != 0) {
         /* sleep was interrupted or failed; report the error to the caller */
         return -1;
      }
   }

   return 0;
}


ARM® DS-5 Streamline - Profiling the process creation application.

ARM® DS-5 Streamline - Code View

Monday, August 15, 2016

Debugging Multithreaded / Multiprocess Applications on the DE0-Nano-SoC with ARM® DS-5

ARM® DS-5 is an ideal platform for debugging multithreaded, multiprocess applications on ARM Powered® development boards that run the GNU/Linux operating system.  The DE0-Nano-SoC is an excellent reference platform for developing multithreaded, multiprocess applications in Linux user space.  Yocto provides an easy-to-use platform for building a bootable image, and ARM® DS-5 easily integrates with the board for efficient debugging.  Altera packages a version of ARM DS-5 for the DE0-Nano-SoC.

The following requirements were in place for this project.

  • Use a coarse-grained locking strategy. Only lock data.
  • Minimize critical sections.
  • Fork five processes, all of which are attached to the controlling terminal.
  • Create three threads in one of the five processes.
  • Two of the threads will simulate packet processing.
  • One of the threads will generate packets in a buffer.
  • Properly utilize synchronization primitives and mutex locks.
  • Maximize concurrency.
  • Minimize latency.
  • Ensure the order of context switching is always random upon execution - i.e., don't control the scheduler.
  • Utilize ARM DS-5 for building and debugging the application on the attached DE0-Nano-SoC FPGA board.
  • Use autotools for building a shared library and link against the library with a driver program in DS-5.
  • Compile the shared library and driver program using the Linaro GCC ARM-Linux-GNUEABI Hard Float toolchain version 4.8 that is included in the Altera DS-5 download.
  • Compile the shared library and test program using the Linaro GCC ARM-Linux-GNUEABI Hard Float toolchain version 5.3 (latest stable from Linaro as of 08/15/16).
  • Debug the multiprocess, multithreaded application using both toolchains from DS-5.
  • Ensure that all possible errors from calls to pthread functions and other libc functions are properly handled.

The code, which meets the above requirements, is available at
Note the DS-5 Settings in the following images.  In order to compile the code from Eclipse, a level of familiarity with DS-5 and Eclipse is required.
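For reference, a stripped-down sketch of the process and thread layout described in the requirements might look like the following.  This is a simplified outline rather than the project source; the thread functions are stubs and most error handling is omitted for brevity.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/types.h>
#include <sys/wait.h>

/* stub thread functions: one packet producer and two packet processors */
static void *insertpackets(void *arg) { (void)arg; return NULL; }
static void *processpackets(void *arg) { (void)arg; return NULL; }

int main(void)
{
   pid_t pid;
   int idx;

   /* fork five child processes, all attached to the controlling terminal */
   for (idx = 0; idx < 5; idx++) {
      pid = fork();
      if (pid < 0) {
         perror("fork");
         exit(EXIT_FAILURE);
      }
      if (pid == 0) {
         if (idx == 0) {
            /* in one of the five children, create three threads */
            pthread_t tids[3];
            int t;
            pthread_create(&tids[0], NULL, insertpackets, NULL);
            pthread_create(&tids[1], NULL, processpackets, NULL);
            pthread_create(&tids[2], NULL, processpackets, NULL);
            for (t = 0; t < 3; t++)
               pthread_join(tids[t], NULL);
         }
         _exit(EXIT_SUCCESS);
      }
   }

   /* parent waits for all five children */
   while (wait(NULL) > 0)
      ;
   return 0;
}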

DS-5 disassembly / memory analysis - debugging multithreaded, multiprocess applications on ARM Powered® boards

DS-5 Debug Configurations - Files

DS-5 Debug Configurations - Connection

DS-5 Autotools Configure Settings

Synchronized Swimming.  For a description and overview of Parallel Computing, 
see an Introduction to Parallel Computing at

DS-5 Toolchain Editor

Saturday, July 30, 2016

Concurrency, Parallelism, and Barrier Synchronization - Multiprocess and Multithreaded Programming

Concurrency, parallelism, threads, and processes are often misunderstood concepts.

On a preemptive, time sliced UNIX or Linux operating system (Solaris, AIX, Linux, BSD, OS X), sequences of program code from different software applications are executed over time on a single processor.  A UNIX process is a schedulable entity.  On a UNIX system, program code from one process executes on the processor for a time quantum, after which program code from another process executes for a time quantum.  The first process relinquishes the processor either voluntarily or involuntarily so that another process can execute its program code.  This is known as context switching.  When a process context switch occurs, the state of the process is saved to its process control block and another process resumes execution on the processor.  Finally, a UNIX process is heavyweight because it has its own virtual memory space, file descriptors, register state, scheduling information, memory management information, etc.  When a process context switch occurs, this information has to be saved, and this is a computationally expensive operation.

Concurrency refers to the interleaved execution of schedulable entities on a single processor.  Context switching facilitates interleaved execution.  The execution time quantum is so small that the interleaved execution of independent, schedulable entities, often performing unrelated tasks, gives the appearance that multiple software applications are running in parallel.

Concurrency applies to both threads and processes.  A thread is also a schedulable entity and is defined as an independent sequence of execution within a UNIX process.  UNIX processes often have multiple threads of execution that share the memory space of the process.  When multiple threads of execution are running inside of a process, they are typically performing related tasks.

While threads are typically lighter weight than processes, there have been different implementations of both across UNIX and Linux operating systems over the years.  The three models that typically describe these implementations across preemptive, time sliced, multi-user UNIX and Linux operating systems are 1:1, 1:N, and M:N, where 1:1 refers to the mapping of one user space thread to one kernel thread, 1:N refers to the mapping of multiple user space threads to a single kernel thread, and M:N refers to the mapping of M user space threads to N kernel threads.

In summary, both threads and processes are scheduled for execution on a single processor.  Thread context switching is lighter in weight than process context switching.  Both threads and processes are schedulable entities and concurrency is defined as the interleaved execution over time of schedulable entities on a single processor.

The Linux user space APIs for process and thread management are abstracted from a lot of the details, but you can set the level of concurrency and directly influence the time quantum, so system throughput is affected by shorter or longer durations of schedulable entity execution time.
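As a minimal sketch of poking at these knobs on Linux, the following example sets the concurrency hint, switches the calling process to the round-robin policy, and queries the time quantum assigned to it.  The priority value is arbitrary, and changing the scheduling policy requires appropriate privileges.

#define _GNU_SOURCE
#include <stdio.h>
#include <pthread.h>
#include <sched.h>
#include <time.h>

int main(void)
{
   struct sched_param param;
   struct timespec quantum;

   /* hint the desired concurrency level to the threading library
      (a no-op on 1:1 implementations such as NPTL, but part of the API) */
   pthread_setconcurrency(4);

   /* switch the calling process to the round-robin policy;
      this requires appropriate privileges (e.g. CAP_SYS_NICE) */
   param.sched_priority = 10;
   if (sched_setscheduler(0, SCHED_RR, &param) != 0)
      perror("sched_setscheduler");

   /* query the round-robin time quantum assigned to this process */
   if (sched_rr_get_interval(0, &quantum) == 0)
      printf("SCHED_RR quantum: %ld.%09ld seconds\n",
             (long)quantum.tv_sec, quantum.tv_nsec);

   return 0;
}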

Conversely, parallelism refers to the simultaneous execution of multiple schedulable entities during a time quantum.  Both processes and threads can execute in parallel across multiple cores or multiple processors.  On a multiuser system with preemptive time slicing and multiple processor cores, both concurrency and parallelism are often at play.  Affinity scheduling refers to the scheduling of both processes and threads across multiple cores so that their concurrent and often parallel execution is close to optimal.

Software applications are often designed to solve computationally complex problems.  If the algorithm to solve a computationally complex problem can be parallelized, then multiple threads or processes can all run at the same time across multiple cores.  Each process or thread executes by itself and does not contend for resources with other threads or processes that are working on the other parts of the problem to be solved. When each thread or process reaches the point where it can no longer contribute any more work to the solution of the problem, it waits at the barrier.  When all threads or processes reach the barrier, the output of their work is synchronized, and often aggregated by the master process.  Complex test frameworks often implement the barrier synchronization problem when certain types of tests can be run in parallel.
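The pthread library provides barriers directly.  The following is a minimal sketch of the pattern described above: each worker thread does its portion of the work, waits at the barrier, and the single thread that receives PTHREAD_BARRIER_SERIAL_THREAD aggregates the results.  Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static pthread_barrier_t barrier;

static void *worker(void *arg)
{
   long id = (long)arg;

   /* simulate this thread's share of the work */
   printf("thread %ld finished its portion of the work\n", id);

   /* wait until every worker has reached this point */
   if (pthread_barrier_wait(&barrier) == PTHREAD_BARRIER_SERIAL_THREAD) {
      /* exactly one waiter receives this return value and can
         synchronize/aggregate the output of the other workers */
      printf("thread %ld aggregating results\n", id);
   }

   return NULL;
}

int main(void)
{
   pthread_t threads[NTHREADS];
   long idx;

   pthread_barrier_init(&barrier, NULL, NTHREADS);

   for (idx = 0; idx < NTHREADS; idx++)
      pthread_create(&threads[idx], NULL, worker, (void *)idx);

   for (idx = 0; idx < NTHREADS; idx++)
      pthread_join(threads[idx], NULL);

   pthread_barrier_destroy(&barrier);
   return 0;
}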

Most individual software applications running on preemptive, time sliced, multiuser Linux and UNIX operating systems are not designed with heavy, parallel thread or parallel, multi-process execution in mind.  Expensive, parallel algorithms often require multiple, dedicated processor cores with hard real time scheduling constraints.  The following paper describes the solution to a popular, parallel algorithm: flight scheduling.

Last, when designing multithreaded and multiprocess software programs, minimizing the number of locks and the size of critical sections greatly increases concurrency, throughput, and execution efficiency.  Multithreaded and multiprocess programs that do not utilize coarse-grained synchronization strategies do not run efficiently and often require countless hours of debugging.  The use of semaphores, mutex locks, and other synchronization primitives should be minimized to the maximum extent possible in computer programs that share resources between multiple threads or processes.  Proper program design allows schedulable entities to run in parallel or concurrently with high throughput and minimal resource contention, and this is optimal for solving computationally complex problems on preemptive, time sliced, multi-user operating systems without requiring hard real time scheduling.
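As a small illustration of keeping critical sections tight, the hypothetical update below does all private work outside the lock and holds the mutex only around the shared data.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long shared_counter = 0;

void record_result(unsigned long value)
{
   /* private work happens outside the critical section */
   unsigned long adjusted = value * 2;

   /* lock only around the shared data, and hold the lock briefly */
   pthread_mutex_lock(&lock);
   shared_counter += adjusted;
   pthread_mutex_unlock(&lock);
}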

After a considerable amount of research in the above areas, I utilized these design techniques in several successful multithreaded and multiprocess software programs.

Monday, June 13, 2016

VHDL Processes for Pulsing Multiple Lasers at Different Frequencies on Altera FPGA

DE1-SoC GPIO Pins connected to 780nm Infrared Laser Diodes, 660nm Red Laser Diodes, and Oscilloscope

The following VHDL processes pulse the GPIO pins at different frequencies on the Altera DE1-SoC using multiple Phase-Locked Loops.  Multiple infrared laser diodes are connected to the GPIO banks and pulsed at a 50% duty cycle with 16mA at 3.3V.  Each GPIO bank on the DE1-SoC has 36 pins.  Pin 1 is pulsed at 20Hz from GPIO bank 0, and pins 0 and 1 are pulsed at 30Hz from GPIO bank 1.  A direct mode PLL with locked output was configured using the Altera Quartus Prime MegaWizard.  The PLL reference clock frequency is set to 50MHz, the output clock frequency is set to 50MHz, and the duty cycle is set to 50%.  The pin mappings for GPIO banks 0 and 1 are documented in the DE1-SoC datasheet.
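The divider constants themselves are not shown in the excerpt below.  Assuming each counter wraps after FREQ_A_DIVIDER (or FREQ_B_DIVIDER) cycles of the 50MHz PLL output and the GPIO pin toggles once per wrap, the output frequency is approximately f_out = f_clk / (2 × divider), so a 20Hz output needs a divider of about 50,000,000 / (2 × 20) = 1,250,000 and a 30Hz output needs about 833,333.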

Pulsed Laser Diodes via GPIO pins on DE1-SoC FPGA

--Copyright (C) 2016. Bryan R. Hinton
--All rights reserved.
--
--Redistribution and use in source and binary forms, with or without
--modification, are permitted provided that the following conditions
--are met:
--1. Redistributions of source code must retain the above copyright
--   notice, this list of conditions and the following disclaimer.
--2. Redistributions in binary form must reproduce the above copyright
--   notice, this list of conditions and the following disclaimer in the
--   documentation and/or other materials provided with the distribution.
--3. Neither the names of the copyright holders nor the names of any
--   contributors may be used to endorse or promote products derived from this
--   software without specific prior written permission.
--

-------------------------------------------------------------
-- INPUT: direct mode pll with locked output
-- and reference clock frequency set to 50mhz,
-- output clock frequency set to 50mhz with 50% duty
-- cycle and output frequency scaled by freq divider constant
-------------------------------------------------------------
clk_a_process : process (lkd_pll_clk_a)
begin
 if rising_edge(lkd_pll_clk_a) then
  if (cycle_ctr_a < FREQ_A_DIVIDER) then
   cycle_ctr_a <= cycle_ctr_a + 1;
  else
   cycle_ctr_a <= 0;
  end if;
 end if;
end process clk_a_process;
-------------------------------------------------------------
-- INPUT: direct mode pll with locked output
-- and reference clock frequency set to 50mhz,
-- output clock frequency set to 50mhz with 50% duty
-- cycle and output frequency scaled by freq divider constant
-------------------------------------------------------------
clk_b_process : process (lkd_pll_clk_b)
begin
 if rising_edge(lkd_pll_clk_b) then
  if (cycle_ctr_b < FREQ_B_DIVIDER) then
   cycle_ctr_b <= cycle_ctr_b + 1;
  else
   cycle_ctr_b <= 0;
  end if;
 end if;
end process clk_b_process;
---------------------------------------------------------
-- INPUT: direct mode pll with locked output
---------------------------------------------------------
gpio_a_process : process (lkd_pll_clk_a)
begin
 if rising_edge(lkd_pll_clk_a) then
  if (cycle_ctr_a = 0) then
   -- toggle gpio pin 1 from gpio_0
   gpio_sig_0(1) <= NOT gpio_sig_0(1);
  end if;
 end if;
end process gpio_a_process;

---------------------------------------------------------
-- INPUT: direct mode pll with locked output
---------------------------------------------------------
gpio_b_process : process (lkd_pll_clk_b)
begin
 if rising_edge(lkd_pll_clk_b) then
  if (cycle_ctr_b = 0) then
   -- toggle gpio pins 0 and 1 from gpio_1
   gpio_sig_1(1 downto 0) <= NOT gpio_sig_1(1 downto 0);
  end if;
 end if;
end process gpio_b_process;
GPIO_0 <= gpio_sig_0;
GPIO_1 <= gpio_sig_1;

end gpioarch;


DE1-SoC GPIO Bank 0 Pin 1

DE1-SoC GPIO Bank 1 Pins 0 and 1

Thursday, June 2, 2016

FPGA Audio Processing with the Cyclone V Dual-Core ARM Cortex-A9

The DE1-SoC FPGA development board from Terasic is powered by an integrated Altera Cyclone V FPGA and a dual-core ARM Cortex-A9 MPCore processor.  The FPGA and ARM core are connected by a high-speed interconnect fabric, so you can boot Linux on the ARM core and then talk to the FPGA.
The configuration below was built from the Terasic design reference sources.

The DE1-SoC board below has been programmed via Quartus Prime running on Fedora 23, 64-bit Linux.  The FPGA bitstream was compiled from the Terasic audio codec design reference.  After the bitstream was loaded onto the FPGA over the USB Blaster II interface, the NIOS II command shell was used to load the NIOS II software image onto the chip.  A menu-driven debug interface runs in a terminal on the host via the NIOS II shell, with the target connected over the USB Blaster II interface.

A low-level hardware abstraction layer was programmed in C to configure the on-board audio codec chip.  The NIOS II software image is stored in on-chip memory, and a PLL-driven clock signal is fed into the audio chip.  The Verilog code for the hardware design was generated from Qsys.  The design supports configurable sample rates, mic in, and line in/out.

Additional components are connected to the DE1-SoC board in this photo.  The Linear DC934A (LTC2607) DAC is connected to the DE1-SoC and an oscilloscope is connected to the ground and vref pins on the DAC.

The DC934A features an LTC2607 16-bit dual DAC with an I2C interface and an LTC2422 2-channel 20-bit uPower No Latency Delta Sigma ADC.
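As a sketch of driving an I2C DAC like the LTC2607 from Linux user space, the following example uses the standard /dev/i2c-N interface.  The bus number, 7-bit device address, and command/address byte are placeholders that would need to be checked against the board wiring and the LTC2607 datasheet.

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
   /* placeholder bus and address - verify against the board wiring */
   const char *bus = "/dev/i2c-1";
   const int dac_addr = 0x10;
   uint16_t code = 0x8000;   /* mid-scale output code */
   uint8_t buf[3];
   int fd;

   fd = open(bus, O_RDWR);
   if (fd < 0) {
      perror("open");
      return 1;
   }

   /* select the DAC as the target slave device */
   if (ioctl(fd, I2C_SLAVE, dac_addr) < 0) {
      perror("ioctl");
      close(fd);
      return 1;
   }

   /* LTC2607 transfers are three bytes: a command/address byte followed
      by a 16-bit data word (MSB first); 0x30 is assumed here to mean
      "write to and update DAC A" - check the datasheet before use */
   buf[0] = 0x30;
   buf[1] = (uint8_t)(code >> 8);
   buf[2] = (uint8_t)(code & 0xff);

   if (write(fd, buf, sizeof(buf)) != sizeof(buf))
      perror("write");

   close(fd);
   return 0;
}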

3.5mm audio cables are connected to the mic-in and line-out ports.  The DE1-SoC is connected to an external display over VGA so that a local console can be managed via a connected keyboard and mouse when Linux is booted from uSD.

With GPIO pins accessible via the GPIO 0 and 1 breakouts, external LEDs can be pulsed directly from the Hard Processor System (HPS), FPGA, or the FPGA via the HPS.
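As a sketch of the last option, an FPGA-side GPIO register can be reached from Linux on the HPS by mapping the lightweight HPS-to-FPGA bridge with /dev/mem.  The bridge base address below (0xFF200000) is the usual Cyclone V lightweight bridge window, but the PIO register offset is a placeholder that depends on the Qsys address map; on 32-bit ARM, build with -D_FILE_OFFSET_BITS=64 so the physical offset is not sign-extended.

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define LWHPS2FPGA_BASE 0xFF200000u   /* Cyclone V lightweight HPS-to-FPGA bridge */
#define BRIDGE_SPAN     0x00200000u
#define LED_PIO_OFFSET  0x00000000u   /* placeholder: PIO offset from the Qsys address map */

int main(void)
{
   volatile uint32_t *led;
   void *bridge;
   int fd, i;

   fd = open("/dev/mem", O_RDWR | O_SYNC);
   if (fd < 0) {
      perror("open /dev/mem");
      return 1;
   }

   /* map the lightweight bridge window into this process */
   bridge = mmap(NULL, BRIDGE_SPAN, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, LWHPS2FPGA_BASE);
   if (bridge == MAP_FAILED) {
      perror("mmap");
      close(fd);
      return 1;
   }

   led = (volatile uint32_t *)((uint8_t *)bridge + LED_PIO_OFFSET);

   /* toggle the PIO data register a few times */
   for (i = 0; i < 10; i++) {
      *led ^= 0x1u;
      usleep(500000);
   }

   munmap(bridge, BRIDGE_SPAN);
   close(fd);
   return 0;
}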