Sunday, January 25, 2026

Spectral Witness: EPR Pairs and the Physics of Light

The Quantum Nature of Observation

The interrogation of physical reality through the medium of light remains one of the most profound endeavors of scientific inquiry. This pursuit traces its modern theoretical roots to the first half of the 20th century, a pivotal era for physics. In 1935, Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen published a seminal paper that challenged the completeness of quantum mechanics. The correlated particle states they described, now known as EPR pairs, exhibit quantum entanglement: the particles remain inextricably linked, their states correlated regardless of spatial separation.

While their debate centered on quantum mechanics, the core philosophy resonates with modern imaging, where standard observation is often incomplete. Just as the naked eye perceives only a fraction of the electromagnetic spectrum, standard RGB sensors discard the high-dimensional spectral "fingerprint" that defines the chemical and physical properties of a subject. Today, we address this limitation through multispectral imaging. By reconstructing the spectral power distribution of the captured light, we can mathematically recover the information that lies between the visible bands, revealing hidden correlations across wavelength, just as EPR revealed hidden correlations across space.

Silicon Photonic Architecture: The 48MP Foundation

The realization of this physics in modern hardware is constrained by the physical dimensions of the semiconductor used to capture it. The interaction of incident photons with the silicon lattice, generating electron-hole pairs, is the primary data acquisition step for any spectral analysis.

Sensor Architecture: Sony IMX803

The core of this pipeline is the Sony IMX803 sensor. Contrary to persistent rumors of a 1-inch sensor, this is a 1/1.28-inch type architecture, optimized for high-resolution radiometry.

  • Active Sensing Area: Approximately \(9.8 \text{ mm} \times 7.3 \text{ mm}\). This physical limitation is paramount: for a given scene irradiance and exposure time, the total photon count the sensor can integrate scales directly with its area, which sets the fundamental Signal-to-Noise Ratio (SNR) limit.
  • Pixel Pitch: The native photodiode pitch is \(1.22 \, \mu\text{m}\). In standard operation, the Quad-Bayer color filter array allows each 2×2 group of same-color photodiodes to be binned on readout, giving an effective pixel pitch of \(2.44 \, \mu\text{m}\).

Mode Selection

The choice between binned and unbinned modes depends on the analysis requirements:

  • Binned mode (12MP, 2.44 µm effective pitch): Superior for low-light conditions and spectral estimation accuracy. Summing the charge from four photodiodes increases the signal by a factor of 4, while the read noise (adding in quadrature) grows only by a factor of 2, significantly boosting the SNR required for accurate spectral estimation; a worked relation follows this list.
  • Unbinned mode (48MP, 1.22 µm native pitch): Optimal for high-detail texture correlation where spatial resolution drives the analysis, such as resolving fine fiber patterns in historical documents or detecting micro-scale material boundaries.
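
In the read-noise-limited case, the benefit of binning can be written out directly (a simple sketch using assumed notation: per-photodiode signal \(S\) and read noise \(\sigma_r\)):

$$\mathrm{SNR}_{\text{binned}} = \frac{4S}{\sqrt{4\sigma_r^2}} = 2 \cdot \frac{S}{\sigma_r} = 2 \cdot \mathrm{SNR}_{\text{native}}$$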

The Optical Path

The light reaching the sensor passes through a 7-element lens assembly with an aperture of ƒ/1.78. It is critical to note that "Spectral Fingerprinting" never measures the material's reflectance \(R(\lambda)\) in isolation; the camera records it only as filtered by the lens's transmittance \(T(\lambda)\). Modern high-refractive-index glass absorbs specific wavelengths in the near-UV (below 400 nm), which must be accounted for during calibration.
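
A simple forward model makes the calibration problem explicit (notation assumed here for illustration: illuminant \(L(\lambda)\) and per-channel sensor sensitivity \(Q_k(\lambda)\)):

$$C_k = \int_{400\,\mathrm{nm}}^{700\,\mathrm{nm}} L(\lambda)\, R(\lambda)\, T(\lambda)\, Q_k(\lambda)\, d\lambda, \qquad k \in \{R, G, B\}$$

Recovering \(R(\lambda)\) therefore requires characterizing \(L(\lambda)\), \(T(\lambda)\), and \(Q_k(\lambda)\), typically against a reference target captured under the same illumination.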

The Digital Container: DNG 1.7 and Linearity

The accuracy of computational physics depends entirely on the integrity of the input data. The Adobe DNG 1.7 specification provides the necessary framework for scientific mobile photography by strictly preserving signal linearity.

Scene-Referred Linearity

Apple ProRAW utilizes the Linear DNG pathway. Unlike standard RAW files, which store unprocessed mosaic data, ProRAW stores pixel values after demosaicing but before non-linear tone mapping. The data remains scene-referred linear, meaning the digital number stored is linearly proportional to the number of photons collected (\(DN \propto N_{photons}\)). This linearity is a prerequisite for the mathematical rigor of Wiener estimation and spectral reconstruction.

The ProfileGainTableMap

A key enabler in modern DNG (introduced in DNG 1.6 and carried forward into 1.7) is the ProfileGainTableMap (Tag 0xCD2D). This tag stores a spatially varying map of gain values that represents the local tone mapping intended for display.

  • Scientific Stewardship: By decoupling the "aesthetic" gain map from the "scientific" linear data, the pipeline can discard the gain map entirely. This ensures that the spectral reconstruction algorithms operate on pure, linear photon counts, free from the spatially variant distortions introduced by computational photography.

Algorithmic Inversion: From 3 Channels to 16 Bands

Recovering a high-dimensional spectral curve \(S(\lambda)\) (e.g., 16 channels from 400nm to 700nm) from a low-dimensional RGB input is an ill-posed inverse problem. While traditional methods like Wiener Estimation provide a baseline, modern high-end hardware enables the use of advanced Deep Learning architectures.

Wiener Estimation (The Linear Baseline)

The classical approach utilizes Wiener Estimation to minimize the mean square error between the estimated and actual spectra:

$$W = K_r M^T (M K_r M^T + K_n)^{-1}$$

Here \(M\) is the \(3 \times 16\) camera response matrix, \(K_r\) is the autocorrelation (prior covariance) matrix of representative training spectra, and \(K_n\) is the sensor noise covariance. The resulting \(16 \times 3\) matrix \(W\) generates the initial 16-band approximation from the 3-channel input.
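
A minimal NumPy sketch of this estimator follows. The shapes and placeholder matrices (M, K_r, K_n) are illustrative assumptions, not measured calibration data; in practice M comes from camera characterization and K_r from a library of representative reflectance spectra.

import numpy as np

def wiener_matrix(M, K_r, K_n):
    # W = K_r M^T (M K_r M^T + K_n)^{-1}
    return K_r @ M.T @ np.linalg.inv(M @ K_r @ M.T + K_n)

bands, channels = 16, 3
rng = np.random.default_rng(0)

M = rng.random((channels, bands))    # camera spectral response (placeholder)
K_r = np.eye(bands)                  # prior spectral autocorrelation (placeholder)
K_n = 1e-4 * np.eye(channels)        # sensor noise covariance (placeholder)

W = wiener_matrix(M, K_r, K_n)       # shape (16, 3)

rgb_pixels = rng.random((channels, 1000))   # linear RGB samples, one column per pixel
spectra = W @ rgb_pixels                    # estimated 16-band spectra, shape (16, 1000)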

State-of-the-Art: Transformers and Mamba

For high-end hardware environments, we can utilize predictive neural architectures that leverage spectral-spatial correlations to resolve ambiguities.

  • MST++ (Spectral-wise Transformer): The MST++ (Multi-stage Spectral-wise Transformer) architecture represents a significant leap in accuracy. Unlike global matrix methods, MST++ utilizes Spectral-wise Multi-head Self-Attention (S-MSA), which computes attention maps across the spectral channel dimension, allowing the model to learn complex non-linear correlations between texture and spectrum. Hardware Demand: Self-attention scales quadratically with the number of tokens, \(O(N^2)\), and the multi-stage design still requires significant GPU memory (VRAM) for high-resolution images, so processing the full data arrays demands powerful dedicated hardware. A toy sketch of the spectral-wise attention idea follows this list.
  • MSS-Mamba (Linear Complexity): The MSS-Mamba (Multi-Scale Spectral-Spatial Mamba) model introduces Selective State Space Models (SSM) to the domain. It discretizes the continuous state space equation into a recurrent form that can be computed with linear complexity \(O(N)\). The Continuous Spectral-Spatial Scan (CS3) strategy integrates spatial neighbors and spectral channels simultaneously, effectively "reading" the molecular composition in a continuous stream.
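
To make spectral-wise attention concrete, here is a toy NumPy sketch (not the MST++ implementation, and with identity projections instead of learned ones): the tokens are the spectral channels, so the attention map is channel-by-channel.

import numpy as np

def spectral_wise_attention(x):
    # x: (C, N) array of C spectral feature channels flattened over N pixels.
    # Tokens are channels, so the attention map has shape (C, C).
    C, N = x.shape
    q, k, v = x, x, x                         # learned projections omitted in this sketch
    scores = (q @ k.T) / np.sqrt(N)           # channel-to-channel similarity, (C, C)
    scores -= scores.max(axis=-1, keepdims=True)
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)  # softmax over the channel dimension
    return attn @ v                           # re-weighted channels, (C, N)

features = np.random.default_rng(1).random((16, 64 * 64))
out = spectral_wise_attention(features)       # shape (16, 4096)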

Computational Architecture: The Linux Python Stack

Achieving multispectral precision requires a robust, modular architecture capable of handling massive arrays spanning 16 spectral bands. The implementation relies on a heavyweight, Linux-based Python stack designed to run on high-end hardware.

  • Ingestion and Processing: We can utilize rawpy (a LibRaw wrapper) for the low-level ingestion of ProRAW DNG files, bypassing OS-level gamma correction to access the linear 12-bit data directly; an ingestion sketch follows this list. NumPy handles the high-performance matrix algebra required to expand 3-channel RGB data into 16-band spectral cubes.
  • Scientific Analysis: Scikit-image and SciPy are employed for geometric transforms, image restoration, and advanced spatial filtering. Matplotlib provides the visualization layer for generating spectral signature graphs and false-color composites.
  • Data Footprint: The scale of this operation is significant. A single 48.8MP image converted to floating-point precision results in massive file sizes: at 32-bit float, a single 3-band layer is roughly 586 MB (\(48.8 \times 10^6\) pixels × 3 bands × 4 bytes), and at 64-bit double precision that figure doubles, so intermediate processing files routinely exceed 600MB. When expanded to a full 16-band multispectral cube, the storage and I/O requirements scale proportionally, necessitating the stability and memory management capabilities of a Linux environment.
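
A minimal ingestion sketch, assuming a ProRAW file named capture.dng in the working directory; the rawpy postprocess flags request a linear (gamma 1.0), non-brightened, 16-bit output so the downstream spectral math sees scene-referred values.

import numpy as np
import rawpy

with rawpy.imread("capture.dng") as raw:
    rgb16 = raw.postprocess(
        gamma=(1, 1),            # keep the data linear
        no_auto_bright=True,     # no exposure "enhancement"
        output_bps=16,           # 16-bit output container
        use_camera_wb=True,      # apply the as-shot white balance
    )

rgb = rgb16.astype(np.float32) / 65535.0   # (H, W, 3), linear, in [0, 1]
print(rgb.shape, f"{rgb.nbytes / 1e6:.0f} MB in memory at float32")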

Completing the Picture

The successful analysis of complex material properties relies on a convergence of rigorous physics and advanced computation.

  • Photonic Foundation: The Sony IMX803 provides the necessary high-SNR photonic capture, with mode selection (binned vs. unbinned) driven by the specific analytical requirements of each examination.
  • Data Integrity: DNG 1.7 is the critical enabler, preserving the linear relationship between photon flux and digital value while sequestering non-linear aesthetic adjustments in metadata.
  • Algorithmic Precision: While Wiener estimation serves as a fast approximation, the highest fidelity is achieved through Transformer (MST++) and Mamba-based architectures. These models disentangle the complex non-linear relationships between visible light and material properties, effectively generating 16 distinct spectral bands from 3 initial channels.
  • Historical Continuity: The EPR paradox of 1935 revealed that quantum particles share hidden correlations across space, correlations invisible to local measurement but real nonetheless. Modern spectral imaging reveals an analogous truth: materials possess hidden correlations across wavelength, invisible to trichromatic vision but accessible through mathematical reconstruction. In both cases, completeness requires looking beyond what direct observation provides.

This synthesis of hardware specification, file format stewardship, and deep learning reconstruction defines the modern standard for non-destructive material analysis, a spectral witness to what light alone cannot tell us.

Friday, January 16, 2026

The Unbroken Identity: Quantum-Safe Resistance

Remembrance is the act of ensuring that truth remains immutable over time. In the physical world, we rely on archives to preserve our stories. In the digital world, we rely on cryptography to preserve identity, authorship, and trust. 

A new threat from quantum computing now challenges that foundation. At scale, it will be capable of forging or retroactively exposing the cryptographic records that define our digital lives.

To protect the integrity of collective memory, and to ensure that identity cannot be harvested by future adversaries, I have moved beyond legacy cryptographic standards and implemented the highest level of post-quantum security available today.

The Dual Threat: Shor and Grover

Quantum computing introduces two distinct mathematical threats to modern cryptography. Understanding the shift to post-quantum standards requires understanding both.

Shor's Algorithm: The Public Key Breaker

Shor's algorithm is the existential threat. It efficiently solves the integer factorization and discrete logarithm problems that underpin nearly all classical public key cryptography, including RSA, Diffie-Hellman, and elliptic curve systems (ECC). This is not a degradation; it is a total break. A sufficiently powerful quantum computer can derive a private key from a public key, rendering classical identity systems fundamentally unsafe.

Grover's Algorithm: The Symmetric Squeezer

Grover's algorithm targets symmetric cryptography and hash functions. It provides a quadratic speedup for brute force searches, effectively halving the security strength of a key. This is why AES-256 matters: even after Grover's reduction, it retains 128 bits of effective security, which remains computationally infeasible to break.
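
The arithmetic behind that claim is direct: brute-forcing a \(k\)-bit key costs on the order of \(2^k\) classical evaluations, while Grover's algorithm needs roughly \(\sqrt{2^k} = 2^{k/2}\) quantum evaluations:

$$\text{AES-128: } 2^{128} \rightarrow 2^{64} \qquad\qquad \text{AES-256: } 2^{256} \rightarrow 2^{128}$$

A \(2^{64}\) work factor is widely considered an inadequate long-term margin, while \(2^{128}\) remains computationally infeasible.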

The Practical Consequence: Store Now, Decrypt Later

The most immediate danger is the Store Now, Decrypt Later (SNDL) attack. Encrypted traffic, identity assertions, certificates, and signatures can be harvested today while classical cryptography still holds, then stored indefinitely. Once quantum capability matures, those archives can be retroactively decrypted or forged. If our cryptographic foundations fail, our ability to bear witness to our own digital history fails with them.

Moving Beyond Legacy Standards: Why ML-DSA-87

For years, the gold standard in high security environments was elliptic curve cryptography, particularly P-384 (ECDSA). While P-384 provides roughly 192 bits of classical security, it offers zero resistance to Shor's algorithm. It was designed for a classical world, and that world is ending.

For this reason, I have implemented ML-DSA-87 for root CA and signing operations. ML-DSA-87 is the highest security tier of the lattice-based ML-DSA standard (FIPS 204), providing Category 5 security, comparable in attack cost to AES-256. Choosing this level, rather than the more common ML-DSA-65, ensures that the identity of my network is built with the largest possible security margin available today.

Hardware Reality: AArch64 and the PQC Load

Post-quantum cryptography is no longer theoretical. It is deployable now, even on routers and mobile-class hardware. I am running a custom OpenSSL 3.5 build on an AArch64 MediaTek Filogic 830/880 platform. This SoC family is unusually well suited for post-quantum workloads.

Vector Scaling with NEON

ML-KEM and ML-DSA rely heavily on polynomial arithmetic (number-theoretic transforms). ARM NEON vector instructions allow these operations to be performed in parallel, significantly reducing TLS handshake latency even with large PQ key material.

Memory Efficiency

Post-quantum keys are large. An ML-KEM-1024 public key is 1568 bytes, versus 49 bytes for a compressed P-384 public key. The 64-bit address space of AArch64 allows these buffers to be managed cleanly, avoiding the fragmentation and memory-pressure issues seen on older 32-bit architectures.

Technical Verification: Post-Quantum CLI Checks

After installing the custom toolchain on the AArch64 target, the post-quantum stack can be verified directly.

KEM Verification

openssl list -kem-algorithms

Expected Output:

ml-kem-1024
secp384r1mlkem1024 (high-security hybrid)

Signature Verification

openssl list -signature-algorithms | grep -i ml

Expected Output:

ml-dsa-87 (256-bit security)

The presence of these algorithms confirms that the platform supports both post-quantum key exchange (ML-KEM-1024) and quantum-resistant signatures (ML-DSA-87).

Summary: My AArch64 Post-Quantum Stack

  • Library: OpenSSL 3.5.4 (custom AArch64 build)
  • SoC: MediaTek Filogic 830 / 880
  • Architecture: ARMv8-A (AArch64)
  • Key Exchange: ML-KEM-1024 + hybrids
  • Identity & Signing: ML-DSA-87
  • Security Tier: Level 5 (quantum-ready)
  • Status: Production-ready

By moving directly to ML-KEM-1024 and ML-DSA-87, I have bypassed the legacy bottlenecks of the past decade. My network is no longer preparing for the quantum transition; it has already crossed it. The rest of the industry will follow in time.

Tuesday, November 25, 2025

rk3588 bring-up: u-boot, kernel, and signal integrity

The RK3588 SoC features an octa-core Arm CPU (four Cortex-A76 and four Cortex-A55 cores), a Mali-G610 GPU, and a highly flexible I/O architecture that makes it ideal for embedded Linux SBCs like the Radxa Rock 5B+.

I’ve been exploring and documenting board bring-up for this platform, including u-boot and Linux kernel contributions, device-tree development, and tooling for reproducible builds and signal-integrity validation. Most of this work is still in active development and early upstream preparation.

I’m publishing my notes, measurements, and bring-up artifacts here as the work progresses, while active u-boot and kernel development (patch iteration, test builds, and branch history) is maintained in separate working repositories:

Signal Analysis / Bring-Up Repo: https://github.com/brhinton/signal-analysis

The repository currently includes (with more being added):

  • Device-tree sources and Rock 5B+ board enablement
  • UART signal-integrity captures at 1.5 Mbps measured at the SoC pad
  • Build instructions for kernel, bootloader, and debugging setup
  • Early patch workflows and upstream preparation notes

Additional U-Boot and Linux kernel work, including mainline test builds, feature development, rebases, and patch series in progress, is maintained in separate working repositories. This repo serves as the central location for measurements, documentation, and board-level bring-up notes.

This is an ongoing, work-in-progress engineering effort, and I’ll be updating the repositories as additional measurements, boards, and upstream-ready changes are prepared.

Sunday, August 4, 2024

arch linux uefi with dm-crypt and uki

Arch Linux is known for its high level of customization, and configuring LUKS2 and LVM is a straightforward process. This guide provides a set of instructions for setting up an Arch Linux system with the following features:

  • Root file system encryption using LUKS2.
  • Logical Volume Management (LVM) for flexible storage management.
  • Unified Kernel Image (UKI) bootable via UEFI.
  • Optional: Detached LUKS header on external media for enhanced security.

Prerequisites

  • A bootable Arch Linux ISO.
  • An NVMe drive (e.g., /dev/nvme0n1).
  • (Optional) A microSD card or other external medium for the detached LUKS header.

Important Considerations

  • Data Loss: The following procedure will erase all data on the target drive. Back up any important data before proceeding.
  • Secure Boot: This guide assumes you may want to use UEFI Secure Boot.
  • Detached LUKS Header: Using a detached LUKS header on external media adds a significant layer of security. If you lose the external media, you will lose access to your encrypted data.
  • Swap: This guide uses a swap file. You may also use a swap partition if desired.

Step-by-Step Instructions

  1. Boot into the Arch Linux ISO:

    Boot your system from the Arch Linux installation media.

  2. Set the System Clock:

    # timedatectl set-ntp true
  3. Prepare the Disk:

    • Identify your NVMe drive (e.g., /dev/nvme0n1). Use lsblk to confirm.
    • Wipe the drive:
    • # wipefs --all /dev/nvme0n1
    • Create an EFI System Partition (ESP):
    • # sgdisk /dev/nvme0n1 -n 1::+512M -t 1:EF00
    • Create a partition for the encrypted volume:
    • # sgdisk /dev/nvme0n1 -n 2:0:0 -t 2:8300
  4. Set up LUKS2 Encryption:

    Encrypt the second partition using LUKS2. This example uses aes-xts-plain64 and serpent-xts-plain ciphers, and SHA512 for the hash. Adjust as needed.

    # cryptsetup luksFormat --cipher aes-xts-plain64 \
      --keyslot-cipher serpent-xts-plain --keyslot-key-size 512 \
      --use-random -S 0 -h sha512 -i 4000 /dev/nvme0n1p2
    • --cipher: Specifies the cipher used for data (segment) encryption.
    • --keyslot-cipher: Specifies the cipher used to encrypt the volume key inside the keyslot.
    • --keyslot-key-size: Specifies the key size, in bits, for the keyslot cipher.
    • -S 0: Places the new passphrase in key slot 0.
    • -h: Specifies the hash used for key derivation.
    • -i: Specifies the PBKDF iteration time in milliseconds (here, 4000 ms).

    Open the encrypted partition:

    # cryptsetup open /dev/nvme0n1p2 root
  5. Create the File Systems and Mount:

    Create an ext4 file system on the decrypted volume:

    # mkfs.ext4 /dev/mapper/root

    Mount the root file system:

    # mount /dev/mapper/root /mnt

    Create and mount the EFI System Partition:

    # mkfs.fat -F32 /dev/nvme0n1p1
    # mount --mkdir /dev/nvme0n1p1 /mnt/efi

    Create and enable a swap file:

    # dd if=/dev/zero of=/mnt/swapfile bs=1M count=8000 status=progress
    # chmod 600 /mnt/swapfile
    # mkswap /mnt/swapfile
    # swapon /mnt/swapfile
  6. Install the Base System:

    Use pacstrap to install the necessary packages:

    # pacstrap -K /mnt base base-devel linux linux-hardened \
      linux-hardened-headers linux-firmware apparmor mesa \
      xf86-video-intel vulkan-intel git vi vim ukify
  7. Generate the fstab File:

    # genfstab -U /mnt >> /mnt/etc/fstab
  8. Chroot into the New System:

    # arch-chroot /mnt
  9. Configure the System:

    Set the timezone:

    # ln -sf /usr/share/zoneinfo/UTC /etc/localtime
    # hwclock --systohc

    Uncomment en_US.UTF-8 UTF-8 in /etc/locale.gen and generate the locale:

    # sed -i 's/#'"en_US.UTF-8"' UTF-8/'"en_US.UTF-8"' UTF-8/g' /etc/locale.gen
    # locale-gen
    # echo 'LANG=en_US.UTF-8' > /etc/locale.conf
    # echo "KEYMAP=us" > /etc/vconsole.conf

    Set the hostname:

    # echo myhostname > /etc/hostname
    # cat <<EOT >> /etc/hosts
    127.0.0.1 localhost
    ::1 localhost
    127.0.1.1 myhostname.localdomain myhostname
    EOT

    Configure mkinitcpio.conf to include the encrypt hook:

    # sed -i 's/HOOKS.*/HOOKS=(base udev autodetect modconf kms \
      keyboard keymap consolefont block encrypt filesystems resume fsck)/' \
      /etc/mkinitcpio.conf

    Create the initial ramdisk:

    # mkinitcpio -P

    Install the bootloader:

    # bootctl install

    Set the root password:

    # passwd

    Install microcode and efibootmgr:

    # pacman -S intel-ucode efibootmgr

    Get the swap offset:

    # swapoffset=`filefrag -v /swapfile | awk '/\s+0:/ {print $4}' | \
      sed -e 's/\.\.$//'`

    Get the UUID of the encrypted partition:

    # blkid -s UUID -o value /dev/nvme0n1p2

    Create the EFI boot entry. Replace <UUID OF CRYPTDEVICE> with the actual UUID:

    # efibootmgr --disk /dev/nvme0n1 --part 1 --create --label "Linux" \
      --loader /vmlinuz-linux --unicode "cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root \
      root=/dev/mapper/root resume=/dev/mapper/root resume_offset=$swapoffset \
      rw initrd=\intel-ucode.img initrd=\initramfs-linux.img" --verbose

    Configure the UKI presets:

    # cat <<EOT >> /etc/mkinitcpio.d/linux.preset
    ALL_kver="/boot/vmlinuz-linux"
    ALL_microcode=(/boot/*-ucode.img)
    PRESETS=('default' 'fallback')
    default_uki="/efi/EFI/Linux/arch-linux.efi"
    default_options="--splash /usr/share/systemd/bootctl/splash-arch.bmp"
    fallback_uki="/efi/EFI/Linux/arch-linux-fallback.efi"
    fallback_options="-S autodetect"
    EOT

    Create the UKI directory:

    # mkdir -p /efi/EFI/Linux

    Configure the kernel command line:

    # cat <<EOT >> /etc/kernel/cmdline
    cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root root=/dev/mapper/root \
    resume=/dev/mapper/root resume_offset=$swapoffset rw
    EOT

    Build the UKIs:

    # mkinitcpio -p linux

    Configure the kernel install layout:

    # echo "layout=uki" >> /etc/kernel/install.conf
  10. Configure Networking (Optional):

    Create a systemd-networkd network configuration file:

    # cat <<EOT >> /etc/systemd/network/nic0.network
    [Match]
    Name=nic0
    [Network]
    DHCP=yes
    EOT
  11. Install a Desktop Environment (Optional):

    Install Xorg, Xfce, LightDM, and related packages:

    # pacman -Syu
    # pacman -S xorg xfce4 xfce4-goodies lightdm lightdm-gtk-greeter \
      libva-intel-driver mesa xorg-server xorg-xinit sudo
    # systemctl enable lightdm
    LightDM will start automatically on the next boot; systemctl start has no effect inside the chroot.
  12. Enable Network Services (Optional):

    # systemctl enable systemd-resolved.service
    # systemctl enable systemd-networkd.service
    Both services will start on the next boot; they cannot be started from inside the chroot.
  13. Create a User Account:

    Create a user account, add it to the wheel group, and set its password:

    # useradd -m -G wheel -s /bin/bash myusername
    # passwd myusername
  14. Reboot:

    Exit the chroot environment and reboot your system:

    # exit
    # umount -R /mnt
    # reboot

Saturday, April 6, 2024

Multidimensional arrays of function pointers in C

Embedded hardware typically includes an application processor and one or more adjacent processors attached to the printed circuit board. The firmware that resides on the adjacent processors responds to instructions or commands. Different processors on the same board are often produced by different companies. For the system to function properly, it is imperative that the processors communicate reliably and that the firmware handles every class of possible error.

Formal requirements for firmware-related projects may include the validation and verification of the firmware on a co-processor via its application programming interface (API). Co-processors typically run 8-, 16-, or 32-bit embedded operating systems. If the co-processor manufacturer provides a development board for testing the firmware on a specific co-processor, the development board may have its own application processor. Familiarity with all of the applicable bus communication protocols, both synchronous and asynchronous, is important. High-volume testing of firmware can be accomplished using function-like macros and arrays of function pointers. Processor-specific firmware is written in C and assembly, whether the target is 8-, 16-, 32-, or 64-bit. Executing inline assembly from C is straightforward and often required. Furthermore, handling time constraints such as real-time execution on adjacent processors is easier in C, and executing syscalls, low-level C functions, and userspace library functions is often more efficient. Timing analysis is a key consideration when testing firmware, and the execution timing of compiled C code on a time-sliced OS such as Linux is already constrained by the scheduler.

To read tests based on a custom grammar, a scanner and parser written in C can be used. Lex is ideal for building a computationally efficient lexical analyzer that outputs a sequence of tokens; in this case, the tokens comprise the function signatures and any associated function metadata, such as expected execution time. Creating a context-free grammar and generating the associated syntax tree from the lexical input is straightforward. Dynamic arrays of function pointers can then be allocated at run-time, and code within external object files or libraries can be executed in parallel using multiple processes or threads. The symbol table information from those files can be stored in multi-dimensional arrays. While C is a statically typed language, this design can be used to execute generic, variadic functions at run-time from tokenized input, with constant-time lookup, minimal overhead, and specific run-time expectations (stack return value, execution time, count, etc.).

At a high level, lists of pointers to type-independent, variadic functions and their associated parameters can be stored within multi-dimensional arrays. The C code in the repository below uses arrays of function pointers to execute functions via their addresses, along with list management functions from the Linux kernel that I ported to userspace.

https://github.com/brhinton/bcn

Wednesday, January 12, 2022

Concurrency, Parallelism, and Barrier Synchronization - Multiprocess and Multithreaded Programming

On preemptive, time-sliced UNIX or Linux operating systems such as Solaris, AIX, Linux, BSD, and OS X, program code from one process executes on the processor for a time slice or quantum. After this time has elapsed, program code from another process executes for a time quantum. Linux divides CPU time into epochs, and each process has a specified time quantum within an epoch. The execution quantum is so small that the interleaved execution of independent, schedulable entities, often performing unrelated tasks, gives the appearance of multiple software applications running in parallel.

When the currently executing process relinquishes the processor, either voluntarily or involuntarily, another process can execute its program code. This event is known as a context switch, which facilitates interleaved execution. Time-sliced, interleaved execution of program code within an address space is known as concurrency.

The Linux kernel is fully preemptive, meaning it can force a context switch, even while a task is executing in kernel mode, when a higher-priority process becomes runnable. When a context switch occurs, the state of the outgoing process is saved to its process control block, and another process resumes execution on the processor.

A UNIX process is considered heavyweight because it has its own address space, file descriptors, register state, and program counter. In Linux, this information is stored in the task_struct. When a process context switch occurs, this state must be saved and later restored, which is a computationally expensive operation.

Concurrency applies to both threads and processes. A thread is an independent sequence of execution within a UNIX process, and it is also considered a schedulable entity. Both threads and processes are scheduled for execution on a processor core, but thread context switching is lighter in weight than process context switching.

In UNIX, processes often have multiple threads of execution that share the process's memory space. When multiple threads are running inside a process, they typically perform related tasks. The Linux user-space APIs for process and thread management abstract many details; even so, scheduling parameters (and, historically, the thread concurrency level) can be tuned to influence how long schedulable entities run, which in turn affects overall system throughput.

While threads are typically lighter weight than processes, there have been different implementations across UNIX and Linux operating systems over the years. Three models describe these implementations on preemptive, time-sliced, multi-user UNIX and Linux systems: 1:1, 1:N, and M:N, each describing how user-space threads map onto kernel threads.

In the 1:1 model, one user-space thread is mapped to one kernel thread. This allows for true parallelism, as each thread can run on a separate processor core. However, creating and managing a large number of kernel threads can be expensive.

In the 1:N model, multiple user-space threads are mapped to a single kernel thread. This is more lightweight, as there are fewer kernel threads to create and manage. However, it does not allow for true parallelism, as only one thread can execute on a processor core at a time.

In the M:N model, N user-space threads are mapped to M kernel threads. This provides a balance between the 1:1 and 1:N models, as it allows for both true parallelism and lightweight thread creation and management. However, it can be complex to implement and can lead to issues with load balancing and resource allocation.

Parallelism on a time-sliced, preemptive operating system means the simultaneous execution of multiple schedulable entities over a time quantum. Both processes and threads can execute in parallel across multiple cores or processors. Concurrency and parallelism are at play on a multi-user system with preemptive time-slicing and multiple processor cores. Affinity scheduling refers to scheduling processes and threads across multiple cores so that their concurrent and parallel execution is close to optimal.

More precisely, affinity scheduling is the practice of assigning processes or threads to specific processors or cores to optimize their execution and avoid unnecessary migrations. This can improve overall system performance by reducing cache misses, among other benefits. In contrast, non-affinity scheduling allows processes and threads to execute on any available processor or core, which can result in more frequent migrations, cold caches, and lower performance.

Software applications are often designed to solve computationally complex problems. If the algorithm for a computationally complex problem can be parallelized, then multiple threads or processes can run at the same time across multiple cores. Each process or thread executes independently and does not contend for resources with the other threads or processes working on other parts of the problem. When a thread or process reaches the point where it can no longer contribute any more work to the solution, it waits at the barrier, if a barrier has been implemented in software. When all threads or processes reach the barrier, their work output is synchronized and often aggregated by the primary process; a minimal sketch of this pattern appears below. Complex test frameworks often implement barrier synchronization when certain types of tests can be run in parallel. Most individual software applications running on preemptive, time-sliced, multi-user Linux and UNIX operating systems are not designed with heavy parallel-thread or parallel multiprocess execution in mind.
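
A minimal Python illustration of the barrier pattern described above (a sketch of the concept, not tied to any particular test framework): four workers each compute a share of a problem, wait at a barrier, and the primary thread aggregates the results once everyone has arrived.

import threading

NUM_WORKERS = 4
results = [0] * NUM_WORKERS
barrier = threading.Barrier(NUM_WORKERS + 1)     # the workers plus the primary thread

def worker(idx):
    # Each worker solves its part of the problem independently...
    results[idx] = sum(range(idx * 1_000, (idx + 1) * 1_000))
    barrier.wait()                               # ...then waits for the others

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
for t in threads:
    t.start()

barrier.wait()                                   # primary waits until all workers arrive
print("aggregated result:", sum(results))        # past the barrier: safe to aggregate
for t in threads:
    t.join()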

Minimizing lock granularity (holding finer-grained locks for shorter durations) increases concurrency, throughput, and execution efficiency when designing multithreaded and multiprocess software. Multithreaded and multiprocess programs that do not correctly use synchronization primitives often require countless hours of debugging. The use of semaphores, mutex locks, and other synchronization primitives should be kept to the minimum required in programs that share resources between multiple threads or processes. Proper program design allows schedulable entities to run in parallel or concurrently with high throughput and minimal resource contention. This is optimal for solving computationally complex problems on preemptive, time-sliced, multi-user operating systems without requiring hard real-time scheduling.

Wednesday, February 24, 2021

A hardware design for variable output frequency using an n-bit counter

The DE1-SoC from Terasic is an excellent board for hardware design and prototyping. The following VHDL process is from a hardware design created for the Terasic DE1-SoC FPGA. The board's ten slide switches and four push-buttons supply an n-bit count value and an adjustable multiplier that set the output frequency of one or more output pins at a 50% duty cycle.

As the switches are moved or the buttons are pressed, the seven-segment display is updated to reflect the numeric output frequency, and the output pin(s) are driven at the desired frequency. The onboard clock runs at 50MHz, and the signal on the output pins is updated on the rising edge of the clock input signal (positive edge-triggered). At 50MHz there are 50 million rising edges per second, so toggling an output pin on every rising edge yields at most 25 million full output cycles per second. An LED attached to one of the output pins would then blink 25 million times per second, far too fast for the human eye to follow. The persistence of vision, which is the time the human eye retains an image after it disappears from view, is approximately 1/16th of a second, so an LED blinking at that rate appears as a continuous light.

scaler <= compute_prescaler((to_integer(unsigned(SW))) * scaler_mlt);

gpiopulse_process : process(CLOCK_50, KEY(0))
begin
  if (KEY(0) = '0') then                      -- async reset
    count <= 0;
  elsif rising_edge(CLOCK_50) then
    if (count = scaler - 1) then
      state <= not state;
      count <= 0;
    elsif (count = clk50divider) then         -- auto reset
      count <= 0;
    else
      count <= count + 1;
    end if;
  end if;
end process gpiopulse_process;
The scaler signal is calculated by the compute_prescaler function, which converts the switch value (SW) to an integer with to_integer, multiplies it by the scaler_mlt multiplier, and returns the prescaler value. This scaler signal controls the frequency of the pulse generated on the output pin.

The gpiopulse_process process is sensitive to the CLOCK_50 signal and to the push-button KEY(0). KEY(0) provides an asynchronous reset; all other updates occur on the rising edge of CLOCK_50.

The count signal is incremented on each rising edge of the CLOCK_50 signal until it reaches the value of scaler - 1. When this happens, the state signal is inverted and count is reset to 0. If count reaches the value of clk50divider, it is also reset to 0.

Overall, this code generates a pulse signal with a frequency controlled by the value of a switch and a multiplier, which is generated on a specific output pin of the FPGA board. The pulse signal is toggled between two states at a frequency determined by the scaler signal.
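
Because state toggles once every scaler clock cycles, the output frequency follows a simple relationship. A small Python sketch of that relationship (the function name and example values are illustrative, not taken from the design files):

CLOCK_HZ = 50_000_000                    # DE1-SoC onboard clock

def output_frequency_hz(scaler: int) -> float:
    # One output period requires two toggles of 'state',
    # and each toggle takes 'scaler' clock cycles.
    return CLOCK_HZ / (2 * scaler)

print(output_frequency_hz(25_000_000))   # 1.0 Hz -- a visible LED blink
print(output_frequency_hz(25))           # 1.0 MHz on the output pin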

It is important to note that concurrent statements within an architecture are executed concurrently, meaning that they are evaluated concurrently and in no particular order. However, the sequential statements within a process are executed sequentially, meaning that they are evaluated in order, one at a time. Processes themselves are executed concurrently with other processes, and each process has its own execution context.