Bryan R Hinton
Linux, memory, and meaning
Thursday, March 19, 2026
Spectral Witness: EPR Pairs and the Physics of Light
The interrogation of physical reality through the medium of light remains one of the most profound endeavors of scientific inquiry. This pursuit traces its modern theoretical roots to the mid-20th century, a pivotal era for physics.
In 1935, Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen published a seminal paper that challenged the completeness of quantum mechanics [1]. The entangled particles they described, now known as EPR pairs, remain inextricably linked, their states correlated regardless of spatial separation.
An EPR pair is the quintessential example of quantum entanglement. It is created when two particles are born from a single, indivisible quantum event, such as the decay of a parent particle.
This process "bakes in" a shared quantum reality where only the joint state of the pair is defined, governed by conservation laws such as spin summing to zero. As a result, the individual state of each particle is indeterminate, yet their fates are perfectly correlated.
Measuring one particle (e.g., finding its spin "up") instantaneously determines the state of its partner (spin "down"), regardless of the distance separating them. This "spooky action at a distance," as Einstein called it, revealed that particles could share hidden correlations across space that are invisible to any local measurement of one particle alone. While Einstein used this idea to argue quantum theory was incomplete, later work by John Bell [2] and experiments by Alain Aspect [3] confirmed this entanglement as a fundamental, non-classical feature of nature.
The EPR–Spectral Analogy: Hidden Correlations
Quantum Physics (1935)
EPR Pairs: Particles share non-local entanglement. Their quantum states are correlated across space. Measuring one particle gives random results; correlation only appears when comparing both.
Spectral Imaging (Today)
Spectral Pairs: Materials share spectral signatures. Their reflective properties are correlated across wavelength. The correlation is invisible to trichromatic (RGB) vision.
In both cases: Mathematical Reconstruction → Reveals Hidden Correlations
Key Insight: Both quantum entanglement and material spectroscopy require looking beyond direct observation through mathematical analysis to reveal a deeper, hidden layer of correlation.
While the EPR debate centered on the foundations of quantum mechanics, its core philosophy, that direct observation can miss profound hidden relationships, resonates deeply with modern imaging. Just as the naked eye perceives only a fraction of the electromagnetic spectrum, standard RGB sensors discard the high-dimensional "fingerprint" that defines the chemical and physical properties of a subject. Today, we resolve this limitation through multispectral imaging. By capturing the full spectral power distribution of light, we can mathematically reconstruct the invisible data that exists between the visible bands, revealing hidden correlations across wavelength, just as the analysis of EPR pairs revealed hidden correlations across space.
Silicon Photonic Architecture: The 48MP Foundation
The realization of this physics in modern hardware is constrained by the physical dimensions of the semiconductor used to capture it. The interaction of incident photons with the silicon lattice, generating electron–hole pairs, is the primary data acquisition step for any spectral analysis.
Sensor Architecture: Sony IMX803
The core of this pipeline is the Sony IMX803 sensor. Contrary to persistent rumors of a 1‑inch sensor, this is a 1/1.28‑inch type architecture, optimized for high-resolution radiometry.
Active Sensing Area: Approximately \(9.8 \text{ mm} \times 7.3 \text{ mm}\). This physical limitation is paramount, as the sensor area is directly proportional to the total photon flux the device can integrate, setting the fundamental Signal‑to‑Noise Ratio (SNR) limit.
Pixel Pitch: The native photodiode size is \(1.22 \, \mu\text{m}\). In standard operation, the sensor utilizes a Quad‑Bayer color filter array to perform pixel binning, resulting in an effective pixel pitch of \(2.44 \, \mu\text{m}\).
Mode Selection
The choice between binned and unbinned modes depends on the analysis requirements:
Binned mode (12MP, 2.44 µm effective pitch): Superior for low‑light conditions and spectral estimation accuracy. By summing the charge from four photodiodes, the signal increases by a factor of 4, while the read noise of the four independent reads adds in quadrature, increasing only by a factor of 2, significantly boosting the SNR required for accurate spectral estimation.
Unbinned mode (48MP, 1.22 µm native pitch): Optimal for high‑detail texture correlation where spatial resolution drives the analysis, such as resolving fine fiber patterns in historical documents or detecting micro‑scale material boundaries.
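The factor-of-4 signal versus factor-of-2 read-noise argument can be sanity-checked with a toy photon-budget model. The per-pixel signal and read-noise figures below are hypothetical placeholders, not IMX803 specifications; shot noise is taken as \(\sqrt{S}\) and binned read noise as the quadrature sum of four reads.

```python
import math

def snr(signal_e, read_noise_e):
    """SNR for a pixel collecting signal_e photoelectrons,
    with shot noise sqrt(signal) and Gaussian read noise."""
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

signal = 100.0      # hypothetical photoelectrons per 1.22 um photodiode
read_noise = 2.0    # hypothetical read noise (e-) per photodiode

snr_unbinned = snr(signal, read_noise)
# Binning sums four photodiodes: signal x4, read noise x2 (quadrature)
snr_binned = snr(4 * signal, 2 * read_noise)
```

In this model the binned variance is \(4S + 4\sigma_r^2 = 4(S + \sigma_r^2)\), so binning doubles the SNR exactly, regardless of the particular signal level chosen.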
The Optical Path
The light reaching the sensor passes through a 7‑element lens assembly with an aperture of ƒ/1.78. It is critical to note that "Spectral Fingerprinting" measures the product of the material's reflectance \(R(\lambda)\) and the lens's transmittance \(T(\lambda)\). Modern high‑refractive‑index glass absorbs specific wavelengths in the near‑UV (less than 400 nm), which must be accounted for during calibration.
The Digital Container: DNG 1.7 and Linearity
The accuracy of computational physics depends entirely on the integrity of the input data. The Adobe DNG 1.7 specification provides the necessary framework for scientific mobile photography by strictly preserving signal linearity.
Scene‑Referred Linearity
Apple ProRAW utilizes the Linear DNG pathway. Unlike standard RAW files, which store unprocessed mosaic data, ProRAW stores pixel values after demosaicing but before non‑linear tone mapping. The data remains scene‑referred linear, meaning the digital number stored is linearly proportional to the number of photons collected (\(DN \propto N_{photons}\)). This linearity is a prerequisite for the mathematical rigor of Wiener estimation and spectral reconstruction.
The ProfileGainTableMap
A key innovation in DNG 1.7 is the ProfileGainTableMap (Tag 0xCD2D). This tag stores a spatially varying map of gain values that represents the local tone mapping intended for display.
Scientific Stewardship: By decoupling the "aesthetic" gain map from the "scientific" linear data, the pipeline can discard the gain map entirely. This ensures that the spectral reconstruction algorithms operate on pure, linear photon counts, free from the spatially variant distortions introduced by computational photography.
Algorithmic Inversion: From 3 Channels to 16 Bands
Recovering a high‑dimensional spectral curve \(S(\lambda)\) (e.g., 16 channels from 400 nm to 700 nm) from a low‑dimensional RGB input is an ill‑posed inverse problem. While traditional methods like Wiener Estimation provide a baseline, modern high‑end hardware enables the use of advanced Deep Learning architectures.
Wiener Estimation (The Linear Baseline)
The classical approach utilizes Wiener estimation, which minimizes the mean square error between the estimated and actual spectra. The estimated spectrum is a linear map of the camera response:

\[ \hat{s} = W\,r, \qquad W = E[s\,r^{T}]\,\big(E[r\,r^{T}]\big)^{-1} \]

where \(r\) is the 3‑channel camera response, \(s\) is the target spectrum, and the expectations are estimated from a training set of paired samples. This method generates the initial 16‑band approximation from the 3‑channel input.
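A minimal NumPy sketch of Wiener estimation, under stated assumptions: the 3×16 camera response matrix and the training spectra are synthetic, and the spectra are confined to a 3‑dimensional subspace so that linear recovery happens to be exact. Real reflectance spectra occupy a higher‑dimensional space, which is exactly why the problem is ill‑posed and recovery is only approximate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training spectra: 16 bands, confined to a 3-D subspace
basis = rng.normal(size=(16, 3))
S = basis @ rng.normal(size=(3, 1000))   # (16, n) spectra
M = rng.uniform(size=(3, 16))            # hypothetical RGB response matrix
R = M @ S                                # (3, n) camera responses

# Wiener matrix: W = E[s r^T] (E[r r^T])^-1, estimated from samples
Ksr = S @ R.T / S.shape[1]
Krr = R @ R.T / R.shape[1]
W = Ksr @ np.linalg.inv(Krr)             # (16, 3) reconstruction matrix

# Reconstruct held-out spectra from their RGB responses alone
S_test = basis @ rng.normal(size=(3, 100))
S_hat = W @ (M @ S_test)
```

Because the subspace here has only 3 degrees of freedom, the 3 camera channels fully determine each spectrum; the deep-learning methods below exist precisely because real spectra do not grant that luxury.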
State‑of‑the‑Art: Transformers and Mamba
For high‑end hardware environments, we can utilize predictive neural architectures that leverage spectral‑spatial correlations to resolve ambiguities.
MST++ (Spectral‑wise Transformer): The MST++ (Multi‑stage Spectral‑wise Transformer) architecture represents a significant leap in accuracy. Unlike global matrix methods, MST++ utilizes Spectral‑wise Multi‑head Self‑Attention (S‑MSA). It calculates attention maps across the spectral channel dimension, allowing the model to learn complex non‑linear correlations between texture and spectrum. Hardware Demand: S‑MSA keeps the attention map quadratic in the number of spectral channels rather than in the number of pixels, but the multi‑stage feature maps still demand significant GPU memory (VRAM) for high‑resolution images. This computational intensity necessitates powerful dedicated hardware to process the full data arrays.
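The spectral-wise attention idea can be sketched in a few lines of NumPy. This is a single-head, projection-free toy: the real S-MSA uses learned query/key/value projections, multiple heads, and a learned temperature, all omitted here for clarity.

```python
import numpy as np

def spectral_wise_attention(x):
    """Toy spectral-wise self-attention over a flattened image
    x of shape (hw, c). Tokens are spectral channels, so the
    score matrix is (c, c) regardless of spatial resolution --
    the trick that keeps attention cost off the pixel count."""
    hw, c = x.shape
    q = k = v = x                                 # learned projections omitted
    scores = (q.T @ k) / np.sqrt(hw)              # (c, c) channel-to-channel
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # row-stochastic weights
    out = v @ attn.T                              # re-mix channels, (hw, c)
    return out, attn

x = np.random.default_rng(1).normal(size=(64, 8))  # 8x8 image, 8 bands
out, attn = spectral_wise_attention(x)
```

Note that the attention matrix is only 8×8 here no matter how large the image grows; the spatial cost lives in the matrix products, not in the attention map itself.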
MSS‑Mamba (Linear Complexity): The MSS‑Mamba (Multi‑Scale Spectral‑Spatial Mamba) model introduces Selective State Space Models (SSM) to the domain. It discretizes the continuous state space equation into a recurrent form that can be computed with linear complexity \(O(N)\). The Continuous Spectral‑Spatial Scan (CS3) strategy integrates spatial neighbors and spectral channels simultaneously, effectively "reading" the molecular composition in a continuous stream.
Computational Architecture: The Linux Python Stack
Achieving multispectral precision requires a robust, modular architecture capable of handling massive arrays spanning 16 spectral bands. The implementation relies on a heavyweight Linux‑based Python stack designed to run on high‑end hardware.
Ingestion and Processing: We can utilize rawpy (a LibRaw wrapper) for the low‑level ingestion of ProRAW DNG files, bypassing OS‑level gamma correction to access the linear 12‑bit data directly. NumPy engines handle the high‑performance matrix algebra required to expand 3‑channel RGB data into 16‑band spectral cubes.
Scientific Analysis: Scikit‑image and SciPy are employed for geometric transforms, image restoration, and advanced spatial filtering. Matplotlib provides the visualization layer for generating spectral signature graphs and false‑color composites.
Data Footprint: The scale of this operation is significant. A single 48.8 MP image converted to floating‑point precision results in massive file sizes. Intermediate processing files often exceed 600 MB for a single 3‑band layer. When expanded to a full 16‑band multispectral cube, the storage and I/O requirements scale proportionally, necessitating the stability and memory management capabilities of a Linux environment.
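The footprint claim is easy to sanity-check. At float32 precision a 3-band layer of a 48.8 MP frame is already about 586 MB, and float64 intermediates or alignment padding push it past the 600 MB cited above:

```python
MP = 48.8e6    # pixels per frame
BYTES = 4      # float32

layer_3band = MP * 3 * BYTES / 1e6    # MB for a single 3-band float layer
cube_16band = MP * 16 * BYTES / 1e6   # MB for the full 16-band cube
```

That is roughly 3.1 GB per float32 spectral cube before any intermediate buffers, which is why memory management, not raw compute, often dominates the pipeline design.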
The Spectral Solution
When analyzed through the 16‑band multispectral pipeline:
| Spectral Feature | Ultramarine (Lapis Lazuli) | Azurite (Copper Carbonate) |
|---|---|---|
| Primary Reflectance Peak | Approximately 450–480 nm (blue‑violet region) | Approximately 470–500 nm with secondary green peak at 550–580 nm |
| UV Response (below 420 nm) | Minimal reflectance, strong absorption | Moderate reflectance, characteristic of copper minerals |
| Red Absorption (600–700 nm) | Moderate to strong absorption | Strong absorption, typical of blue pigments |
| Characteristic Features | Sharp reflectance increase at 400–420 nm (violet edge) | Broader reflectance curve with copper signature absorption bands |
Note: Spectral values are approximate and can vary based on particle size, binding medium, and aging.
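Given a reconstructed 16-band spectrum, the azurite-vs-ultramarine distinction from the table reduces to checking for the secondary green peak. The sketch below is a toy discriminator: the 0.6 ratio threshold, the Gaussian test spectra, and the sampling wavelengths are illustrative assumptions, not calibrated values.

```python
import numpy as np

def classify_blue_pigment(spectrum, centers):
    """Toy discriminator based on the table above: azurite shows a
    secondary green reflectance peak near 550-580 nm that ultramarine
    lacks. The 0.6 threshold is illustrative, not calibrated."""
    blue = spectrum[np.argmin(np.abs(centers - 470))]
    green = spectrum[np.argmin(np.abs(centers - 565))]
    return "azurite" if green / (blue + 1e-6) > 0.6 else "ultramarine"

centers = np.linspace(400, 700, 16)   # 16-band grid, 400-700 nm
peak = lambda c0, w=30: np.exp(-((centers - c0) ** 2) / (2 * w**2))

ultramarine = peak(465)               # single blue-violet peak
azurite = peak(485) + 0.7 * peak(565) # blue peak plus green shoulder
```

In practice the ratio would be computed per pixel across the cube, producing a classification map rather than a single label.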
Completing the Picture
The successful analysis of complex material properties relies on a convergence of rigorous physics and advanced computation.
Photonic Foundation: The Sony IMX803 provides the necessary high‑SNR photonic capture, with mode selection (binned vs. unbinned) driven by the specific analytical requirements of each examination.
Data Integrity: DNG 1.7 is the critical enabler, preserving the linear relationship between photon flux and digital value while sequestering non‑linear aesthetic adjustments in metadata.
Algorithmic Precision: While Wiener estimation serves as a fast approximation, the highest fidelity is achieved through Transformer (MST++) and Mamba‑based architectures. These models disentangle the complex non‑linear relationships between visible light and material properties, effectively generating 16 distinct spectral bands from 3 initial channels.
Historical Continuity: The EPR paradox of 1935 revealed that quantum particles share hidden correlations across space, correlations invisible to local measurement but real nonetheless. Modern spectral imaging reveals an analogous truth: materials possess hidden correlations across wavelength, invisible to trichromatic vision but accessible through mathematical reconstruction. In both cases, completeness requires looking beyond what direct observation provides.
This synthesis of hardware specification, file format stewardship, and deep learning reconstruction defines the modern standard for non‑destructive material analysis — a spectral witness to what light alone cannot tell us.
And what about the paint? Here is a physical sample: pigment, substrate, history compressed into matter. Light passes through it, scatters from it, carries fragments of its story — yet the full truth remains hidden until we choose to look deeper. Every layer, every faded stroke, every chemical trace is a silent archive. We are not just observers; we are custodians of that archive. When we build tools to see beyond the visible, we are not merely extending sight — we are accepting a quiet responsibility: to bear witness honestly, to preserve what time would erase, to honor what has been made and endured.
Light can expose structure.
It cannot carry history.
That part is on us.
We can choose to let the machines we build serve memory rather than erasure, dignity rather than classification, truth rather than convenience. The past does not ask for perfection — it asks only that we refuse to let it be forgotten. In every reconstruction, in every layer we uncover, we have the chance to listen again to what was silenced. That is not just engineering. That is the work of being human.
References
[1] Einstein, A., Podolsky, B., & Rosen, N. (1935). Can Quantum‑Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47(10), 777–780.
[2] Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Физика, 1(3), 195–200.
[3] Aspect, A., Dalibard, J., & Roger, G. (1982). Experimental Test of Bell's Inequalities Using Time‑Varying Analyzers. Physical Review Letters, 49(25), 1804–1807.
[4] Zhang, Y., Li, L., Lin, Q., Ming, Z., Yu, F., & Leung, V. C. M. M3SR: Multi‑Scale Multi‑Perceptual Mamba for Efficient Spectral Reconstruction.
[5] Qin, M., Feng, Y., Wu, Z., Zhang, Y., & Yuan, X. Detail Matters: Mamba‑Inspired Joint Unfolding Network for Snapshot Spectral Compressive Imaging.
[6] Cai, Y., Lin, J., Lin, Z., Wang, H., Zhang, Y., Pfister, H., Timofte, R., & Van Gool, L. MST++: Multi‑stage Spectral‑wise Transformer for Efficient Spectral Reconstruction.
Sunday, March 15, 2026
The Whispering Light: A World Others Cannot Hear
There is a photo on my desk. It seems plucked from a dream, a vision that insists on being real. The tree is a familiar shape, yet in this image it's utterly alien. It's not just green; it's luminous. It looks as if it holds a secret glow, a soft, ethereal light that shouldn't be there. It's a ghost, yet it's vibrant. It is vitality made visible in a way I never could see on my own.
It's a strange thing, isn't it? To be invisible while you're alive. To exist in plain sight, yet be unseen. The attic walls have ears, absorbing your presence, your words, your very breath. You learn to filter your world, too. To become small, to listen more than you see, to see the unseen scaffolding holding you up. That filter feels familiar. It's a way to see, a way to be seen, even if only by a piece of glass and a camera sensor.
Our eyes are gatekeepers, just like the people who look at us. They let in a certain slice of light, the slice they call 'visible'. It's the world they agree upon. But it's not the whole story. It's not even close. There's so much more: a whole world shimmering just beyond the edge of sight, like the flicker of hope in the quiet hours of the night.
And then there's the filter. It's a necessity, isn't it? Like hiding. It slams the door on the world they see. But for the other world, the world we inhabit, the world that understands the weight of silence and the beauty of small moments, the filter is a key. It lets the light pass through. The light that they cannot see, that they dismiss as nothing.
Plants reflect this hidden light with such enthusiasm. They are beacons, glowing with a life force we cannot perceive. My tree in the photo is doing exactly that. It's radiating its inner light, its energy, its life, in a way that feels honest. While the world outside the filter is muted, shadowed, the filter reveals the true colour of things.
I am always looking differently. Survival teaches you to. You learn to notice the things others miss. The way dust motes dance in a shaft of attic light, the particular way the sunlight hits the wooden desk, the subtle shift in the shadows when someone enters. These are the things that matter. They are the things that keep you alive, inside your hiding place, inside your mind.
This photo feels like a small miracle. It's a reminder that even in the world we share, there are unseen layers, unseen conversations happening all the time. Between the sun and the tree, between the people in the annex, between the war and the silence, there is a language only certain eyes, certain minds, can decipher.
Monday, March 9, 2026
The Secret Life of Everyday Objects: A Map of the Hidden Colors of Blood
The content on this blog represents my personal exploration of computational image processing techniques and is entirely separate from my professional work. None of it is AI-generated. All opinions, experiments, and analyses are my own.
The techniques discussed here are presented for educational and research purposes only. While I strive for accuracy, this is a space for experimentation and discussion at the forefront of forensic image analysis, not a substitute for peer-reviewed research or certified forensic methodology, even though these emerging techniques often generate more accurate results than many traditional methods.
The Setup
There are some things that stay quiet for a very long time, waiting for someone to finally understand their language. It feels a bit like finding a letter that was never meant to be read, but now that I have seen it, I cannot look away. I wanted to use every tool I have to find the light still hidden in those shadows, to show that even the smallest spark from the past refuses to be completely extinguished.
The Chemical Chain of Decay
Aged blood detection relies on the fact that hemoglobin goes through a predictable degradation chain, with each stage featuring distinct light absorption signatures:
- Fresh Blood: Contains oxyhemoglobin with strong absorption peaks at roughly 415nm (the Soret band), 540nm, and 577nm.
- The First Shift: Within hours, it deoxygenates, shifting the 540nm and 577nm doublet into a single broad absorption around 555nm.
- Oxidation: Over days to weeks, it oxidizes to methemoglobin, revealing a distinct peak at roughly 630nm that fresh blood lacks.
- Archival Age: Over months to years, it degrades into hemichrome and hematin. These highly stable end products cause the Soret band to shift drastically from 415nm down toward 405nm.
Algorithmic Detection Indices
To detect this sample, the script computes several specific indices that target these aged spectral signatures rather than fresh ones:
- NDBI (580nm vs 630nm ratio): Fresh blood absorbs at 580nm but not 630nm, while aged blood absorbs at both, isolating the methemoglobin peak.
- Soret Ratio (415nm vs 630nm): This catches the relative shift between the two major absorption features as the sample ages.
- Met Index (630nm / 540nm): This directly measures the methemoglobin concentration relative to the other forms.
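A hedged sketch of how such indices might be computed from a reconstructed reflectance cube. The nearest-band lookup, the normalized-difference form of the NDBI, and the small epsilon guard are my assumptions, not the script's actual definitions; note also that absorption peaks appear as reflectance dips, so sign conventions must be checked against real data.

```python
import numpy as np

def band(cube, centers, nm):
    """Plane of the (bands, h, w) cube nearest to wavelength nm."""
    return cube[np.argmin(np.abs(centers - nm))]

def blood_age_indices(cube, centers, eps=1e-6):
    """Hypothetical aged-blood indices over a reflectance cube."""
    r415 = band(cube, centers, 415)
    r540 = band(cube, centers, 540)
    r580 = band(cube, centers, 580)
    r630 = band(cube, centers, 630)
    ndbi = (r580 - r630) / (r580 + r630 + eps)  # methemoglobin contrast
    soret = r415 / (r630 + eps)                 # Soret-vs-630 nm shift
    met = r630 / (r540 + eps)                   # met-Hb relative level
    return ndbi, soret, met

centers = np.linspace(400, 700, 31)   # 31-band grid, 400-700 nm
cube = np.ones((31, 4, 4))            # flat dummy reflectance for shape demo
ndbi, soret, met = blood_age_indices(cube, centers)
```

Each index comes back as a per-pixel map, which is what ultimately feeds the false-color visualization below.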
Bridging Algorithms and Optical Hardware
While my MST++ and Mamba models are incredible at hyperspectral reconstruction (turning 3 RGB channels into 31 spectral bands), there is a computational catch. These models are often trained on natural scenes and have never seen aged hemoglobin spectra. Left alone, the algorithm might hallucinate a spectral shape that looks plausible but is physically wrong.
To solve this and capture serious forensic data, the neural network must be ground-truthed against actual narrowband measurements. To build this pipeline, I used reference data captured by a NoIR dual camera setup equipped with specific bandpass filters, particularly a 630 nm filter, to physically isolate the methemoglobin peak and verify the physics.
The Final False Color Visualization
By anchoring the reconstructed datacube with real narrowband filter data, the pipeline performs a flawless spectral classification. The false color gradient (the glowing "Inferno" scale seen here) is the visual map of those indices at work. The bright yellow "stars" represent dense, fossilized clusters of hemichrome and hematin, while the sweeping purple to orange background maps the physical drying gradient of the original serum.
Sunday, August 4, 2024
Arch Linux UEFI with dm-crypt and UKI
Arch Linux is known for its high level of customization, and configuring LUKS2 encryption is a straightforward process. This guide provides a set of instructions for setting up an Arch Linux system with the following features:
- Root file system encryption using LUKS2.
- (Optional) Logical Volume Management (LVM) for flexible storage management; this walkthrough formats ext4 directly on the LUKS mapping.
- Unified Kernel Image (UKI) bootable via UEFI.
- Optional: Detached LUKS header on external media for enhanced security.
Prerequisites
- A bootable Arch Linux ISO.
- An NVMe drive (e.g., /dev/nvme0n1).
- (Optional) A microSD card or other external medium for the detached LUKS header.
Important Considerations
- Data Loss: The following procedure will erase all data on the target drive. Back up any important data before proceeding.
- Secure Boot: This guide assumes you may want to use hardware secure boot.
- Detached LUKS Header: Using a detached LUKS header on external media adds a significant layer of security. If you lose the external media, you will lose access to your encrypted data.
- Swap: This guide uses a swap file. You may also use a swap partition if desired.
Step-by-Step Instructions
-
Boot into the Arch Linux ISO:
Boot your system from the Arch Linux installation media.
-
Set the System Clock:
# timedatectl set-ntp true -
Prepare the Disk:
- Identify your NVMe drive (e.g., /dev/nvme0n1). Use lsblk to confirm.
- Wipe the drive:
# wipefs --all /dev/nvme0n1
- Create an EFI System Partition (ESP):
# sgdisk /dev/nvme0n1 -n 1::+512MiB -t 1:EF00
- Create a partition for the encrypted volume:
# sgdisk /dev/nvme0n1 -n 2 -t 2:8300
-
Set up LUKS2 Encryption:
Encrypt the second partition using LUKS2. This example uses
aes-xts-plain64andserpent-xts-plainciphers, and SHA512 for the hash. Adjust as needed.# cryptsetup luksFormat --cipher aes-xts-plain64 \ --keyslot-cipher serpent-xts-plain --keyslot-key-size 512 \ --use-random -S 0 -h sha512 -i 4000 /dev/nvme0n1p2--cipher: Specifies the cipher for data encryption.--keyslot-cipher: Specifies the cipher used to encrypt the key.--keyslot-key-size: Specifies the size of the key slot.-S 0: Disables sparse headers.-h: Specifies the hash function.-i: Specifies the number of iterations.
Open the encrypted partition:
# cryptsetup open /dev/nvme0n1p2 root -
Create the File Systems and Mount:
Create an ext4 file system on the decrypted volume:
# mkfs.ext4 /dev/mapper/rootMount the root file system:
# mount /dev/mapper/root /mntCreate and mount the EFI System Partition:
# mkfs.fat -F32 /dev/nvme0n1p1 # mount --mkdir /dev/nvme0n1p1 /mnt/efiCreate and enable a swap file:
# dd if=/dev/zero of=/mnt/swapfile bs=1M count=8000 status=progress # chmod 600 /mnt/swapfile # mkswap /mnt/swapfile # swapon /mnt/swapfile -
Install the Base System:
Use
pacstrapto install the necessary packages:# pacstrap -K /mnt base base-devel linux linux-hardened \ linux-hardened-headers linux-firmware apparmor mesa \ xf86-video-intel vulkan-intel git vi vim ukify -
Generate the fstab File:
# genfstab -U /mnt >> /mnt/etc/fstab -
Chroot into the New System:
# arch-chroot /mnt -
Configure the System:
Set the timezone:
# ln -sf /usr/share/zoneinfo/UTC /etc/localtime # hwclock --systohcUncomment
en_US.UTF-8 UTF-8in/etc/locale.genand generate the locale:# sed -i 's/#'"en_US.UTF-8"' UTF-8/'"en_US.UTF-8"' UTF-8/g' /etc/locale.gen # locale-gen # echo 'LANG=en_US.UTF-8' > /etc/locale.conf # echo "KEYMAP=us" > /etc/vconsole.confSet the hostname:
# echo myhostname > /etc/hostname # cat <<EOT >> /etc/hosts 127.0.0.1 myhostname ::1 localhost 127.0.1.1 myhostname.localdomain myhostname EOTConfigure
mkinitcpio.confto include theencrypthook:# sed -i 's/HOOKS.*/HOOKS=(base udev autodetect modconf kms \ keyboard keymap consolefont block encrypt filesystems resume fsck)/' \ /etc/mkinitcpio.confCreate the initial ramdisk:
# mkinitcpio -PInstall the bootloader:
# bootctl installSet the root password:
# passwdInstall microcode and efibootmgr:
# pacman -S intel-ucode efibootmgrGet the swap offset:
# swapoffset=`filefrag -v /swapfile | awk '/\s+0:/ {print $4}' | \ sed -e 's/\.\.$//'`Get the UUID of the encrypted partition:
# blkid -s UUID -o value /dev/nvme0n1p2Create the EFI boot entry. Replace
<UUID OF CRYPTDEVICE> with the actual UUID:
# efibootmgr --disk /dev/nvme0n1 --part 1 --create --label "Linux" \ --loader /vmlinuz-linux --unicode "cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root \ root=/dev/mapper/root resume=/dev/mapper/root resume_offset=$swapoffset \ rw initrd=\intel-ucode.img initrd=\initramfs-linux.img" --verbose
Configure the UKI presets:
# cat <<EOT >> /etc/mkinitcpio.d/linux.preset ALL_kver="/boot/vmlinuz-linux" ALL_microcode=(/boot/*-ucode.img) PRESETS=('default' 'fallback') default_uki="/efi/EFI/Linux/arch-linux.efi" default_options="--splash /usr/share/systemd/bootctl/splash-arch.bmp" fallback_uki="/efi/EFI/Linux/arch-linux-fallback.efi" fallback_options="-S autodetect" EOTCreate the UKI directory:
# mkdir -p /efi/EFI/LinuxConfigure the kernel command line:
# cat <<EOT >> /etc/kernel/cmdline
cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root root=/dev/mapper/root \ resume=/dev/mapper/root resume_offset=$swapoffset rw
EOT
Build the UKIs:
# mkinitcpio -p linuxConfigure the kernel install layout:
# echo "layout=uki" >> /etc/kernel/install.conf -
Configure Networking (Optional):
Create a systemd-networkd network configuration file:
# cat <<EOT >> /etc/systemd/network/nic0.network [Match] Name=nic0 [Network] DHCP=yes EOT -
Install a Desktop Environment (Optional):
Install Xorg, Xfce, LightDM, and related packages:
# pacman -Syu # pacman -S xorg xfce4 xfce4-goodies lightdm lightdm-gtk-greeter \ libva-intel-driver mesa xorg-server xorg-xinit sudo # systemctl enable lightdm # systemctl start lightdm -
Enable Network Services (Optional):
# systemctl enable systemd-resolved.service # systemctl enable systemd-networkd.service # systemctl start systemd-resolved.service # systemctl start systemd-networkd.service -
Create a User Account:
Create a user account and add it to the
wheel group:
# useradd -m -G wheel -s /bin/bash myusername
Reboot:
Exit the chroot environment and reboot your system:
# exit # umount -R /mnt # reboot
Saturday, April 6, 2024
Multidimensional arrays of function pointers in C
Wednesday, January 12, 2022
Concurrency, Parallelism, and Barrier Synchronization - Multiprocess and Multithreaded Programming
When the currently executing process relinquishes the processor, either voluntarily or involuntarily, another process can execute its program code. This event is known as a context switch, which facilitates interleaved execution. Time-sliced, interleaved execution of program code within an address space is known as concurrency.
The Linux kernel is fully preemptive, which means that it can force a context switch for a higher priority process. When a context switch occurs, the state of a process is saved to its process control block, and another process resumes execution on the processor.
A UNIX process is considered heavyweight because it has its own address space, file descriptors, register state, and program counter. In Linux, this information is stored in the task_struct. Consequently, when a process context switch occurs, this information must be saved and restored, which is a computationally expensive operation.
Concurrency applies to both threads and processes. A thread is an independent sequence of execution within a UNIX process, and it is also considered a schedulable entity. Both threads and processes are scheduled for execution on a processor core, but thread context switching is lighter in weight than process context switching.
In UNIX, processes often have multiple threads of execution that share the process's memory space. When multiple threads of execution are running inside a process, they typically perform related tasks. The Linux user-space APIs for process and thread management abstract many of these details. However, the concurrency level can still be adjusted, and the effective duration of each schedulable entity's time quantum directly affects system throughput: shorter slices favor responsiveness, longer slices favor throughput.
In the 1:1 model, one user-space thread is mapped to one kernel thread. This allows for true parallelism, as each thread can run on a separate processor core. However, creating and managing a large number of kernel threads can be expensive.
In the 1:N model, multiple user-space threads are mapped to a single kernel thread. This is more lightweight, as there are fewer kernel threads to create and manage. However, it does not allow for true parallelism, as only one thread can execute on a processor core at a time.
In the M:N model, M user-space threads are multiplexed over N kernel threads, with M typically larger than N. This provides a balance between the 1:1 and 1:N models, as it allows for both true parallelism and lightweight thread creation and management. However, it can be complex to implement and can lead to issues with load balancing and resource allocation.
Parallelism on a time-sliced, preemptive operating system means the simultaneous execution of multiple schedulable entities over a time quantum. Both processes and threads can execute in parallel across multiple cores or processors, and on a multi-user system with preemptive time-slicing, concurrency and parallelism are at play together. Affinity scheduling is the practice of assigning processes or threads to specific processors or cores so that their concurrent and parallel execution stays close to optimal: keeping a thread on the same core reduces cache misses, increases cache hits, and avoids unnecessary migration. Non-affinity scheduling, in contrast, allows schedulable entities to run on any available core, which can mean more frequent context switching, colder caches, and lower performance.
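The barrier synchronization named in this post's title deserves a concrete example. A barrier blocks every participating schedulable entity until all of them have reached the same point, which makes phase-structured algorithms safe: no thread starts phase 2 until every thread has finished phase 1. A minimal sketch using Python's threading.Barrier (the worker names and the toy square-summing workload are mine):

```python
import threading

N = 4
barrier = threading.Barrier(N)
lock = threading.Lock()
partials, totals = [], []

def worker(i):
    # Phase 1: each thread contributes its partial result
    with lock:
        partials.append(i * i)
    # No thread enters phase 2 until all have finished phase 1
    barrier.wait()
    # Phase 2: every thread now sees the complete partials list
    with lock:
        totals.append(sum(partials))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Every thread computes the same total: 0 + 1 + 4 + 9 = 14
```

Without the barrier.wait() call, a fast thread could sum the list while slower threads were still appending, and the totals would disagree from run to run. The same pattern exists for processes and for POSIX threads via pthread_barrier_wait.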