This is not institutional research; it is simply me, closely observing my own life. The hyperspectral system I use here I built myself, and I operate it exclusively in my personal space, at a microscopic level, on things directly in front of me: my plants, my food, my everyday surroundings. The objects I analyze are within two feet of me, yet I capture images of over 20 megapixels for microscopic analysis.
Transformer models are a powerful tool for image super-resolution. A valid input sequence is critical, and loading the weights into the model for training requires a delicate balance of properly selected training data, code validation, any applicable modifications, and finely tuned model parameters. Deterministic output from the neural network is necessary to ensure valid results. A properly balanced output sequence from a transformer model can then be used as input to a hyperspectral reconstruction and classification model; its weights are loaded with the same care, and its parameters are again finely tuned, in line with the input sequence size.
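To make the determinism point concrete, here is a minimal sketch (not my actual pipeline): a single self-attention layer over flattened image patches, with the weights drawn from a seeded generator so the forward pass is bit-for-bit repeatable. The patch size, embedding dimension, and random weights are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(patches, wq, wk, wv):
    """patches: (n, d) sequence of flattened patch embeddings."""
    q, k, v = patches @ wq, patches @ wk, patches @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (n, n) attention map
    return scores @ v                                  # (n, d) output sequence

rng = np.random.default_rng(0)        # fixed seed -> deterministic weights
d = 16                                # toy embedding dimension (assumption)
img = rng.random((8, 8))              # stand-in for a low-res input tile
patches = img.reshape(4, d)           # 4 patches of 16 pixels each
wq, wk, wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

out = self_attention(patches, wq, wk, wv)
out2 = self_attention(patches, wq, wk, wv)
assert np.array_equal(out, out2)      # same seed, same input -> identical output
print(out.shape)                      # (4, 16)
```

In a real super-resolution model the output sequence would be reshaped back into an upsampled image; here the point is only that a seeded, side-effect-free forward pass gives you the repeatable output the downstream reconstruction model depends on.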
A set of n hyperspectral cubes, each of dimensions m, can then be constructed via extrapolation. The net result is a super- or ultra-high-resolution image in which the wavelength of light can be measured at each pixel. Paired with the proper training data, topological 3D maps of surfaces can be created.
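The cube structure above can be sketched as follows. This is an illustrative layout, not my reconstruction model: a cube stored as (bands, height, width) with a wavelength assigned to each band, so every pixel carries a full spectrum. The band count and 400-700 nm range are assumptions, and the data here is random stand-in values.

```python
import numpy as np

bands, h, w = 31, 4, 4                        # e.g. 31 bands across the visible range
wavelengths = np.linspace(400, 700, bands)    # nm, one wavelength per band
rng = np.random.default_rng(1)
cube = rng.random((bands, h, w))              # stand-in for reconstructed intensities

spectrum = cube[:, 2, 3]                      # full spectrum at pixel (2, 3)
peak_nm = wavelengths[cube.argmax(axis=0)]    # dominant wavelength at each pixel

print(spectrum.shape)   # (31,)
print(peak_nm.shape)    # (4, 4)
```

A per-pixel map like `peak_nm` is the kind of 2D field that, with suitable training data, could be lifted into a topological 3D surface map.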
My recent research has been focused on this. I have built numerous models and written some of the code myself. I have also taken competition code for some of the models, modified it, and run it with good results. I am interested in taking this to the next level.
If anyone is interested in expanding their research in this space, optimizing model and neural network code, or building out enhancements and advanced image-processing techniques, please let me know.
I am currently working with the following -