Over a decade of experience converting all forms of media into digital formats: color correction, stabilization, media repair, and parallel scanning.

Student Innovation Project

Philmo Film Scanner

Innovation Claim:

The Philmo Film Scanner will be innovative in using new technology to reduce the cost of film scanning, in both hardware and the labor required to operate it. The solution also aims to greatly expand access to both the use and the construction of film scanners, now and into the future, with the assistance of film libraries and fellow scanner manufacturers.

By using off-the-shelf and 3D-printed parts with open-source software, anyone from hobbyists to professionals can scan 8mm to 16mm films in both traditional and widescreen formats. Future upgrades include magnetic stripe reading. AI will assist with color correction of faded films and minor damage repair with minimal manual work.

90% of the innovation is actually an invisible software layer. Its primary job is to save time on film fault correction. An additional benefit is protecting the rights of film holders, addressing copyright from two directions. In one direction it prevents unintentional distribution of copyrighted material; in the other it can spot a lost film, making the discovery of lost material a potential source of profit. Re-releases can also “call back” unintentional product placement as a source of advertising revenue.
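As a rough illustration of the rights-protection idea, the sketch below fingerprints a scanned frame with a simple perceptual hash and checks it against an index of known works. The hashing approach, the KNOWN_WORKS index, and all names here are illustrative assumptions for this writeup, not the actual Philmo code.

```python
# Hypothetical sketch: fingerprint each scanned frame and compare it against a
# placeholder index of known copyrighted or "lost" titles supplied by film
# libraries. Average-hash is used only as a simple stand-in technique.
import numpy as np
from PIL import Image

def average_hash(image_path: str, hash_size: int = 8) -> int:
    """Compute a simple average-hash fingerprint of one frame."""
    img = Image.open(image_path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    bits = (pixels > pixels.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Placeholder index: fingerprint -> title (would come from film libraries).
KNOWN_WORKS = {0b0: "Example Lost Film (1923)"}

def check_frame(image_path: str, threshold: int = 5) -> str | None:
    """Return a matching title if the frame resembles a known work."""
    fp = average_hash(image_path)
    for known_fp, title in KNOWN_WORKS.items():
        if hamming_distance(fp, known_fp) <= threshold:
            return title
    return None
```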

This is the current loose framework of how the software works together; parallel processing is indicated with a circle symbol. There is no standard symbol for multiple systems working in parallel, so I have made up notation that makes sense. I have completed about 80% of the code shown here, and I have simulated color repair using Stable Diffusion with A1111, but it isn't ideal. Instead I am going to shift to using HistoGAN.
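To make the parallel-processing part of the diagram concrete, here is a minimal sketch of frames running through independent repair stages on a process pool. The stage names (stabilize, repair_color) and file layout are placeholders, not the actual Philmo module names.

```python
# Minimal sketch: each frame runs through its repair stages independently,
# so frames can be processed in parallel across CPU cores.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def stabilize(frame_path: Path) -> Path:
    # Placeholder: real stage would align the frame against its neighbors.
    return frame_path

def repair_color(frame_path: Path) -> Path:
    # Placeholder: real stage would hand the frame to the recoloring model.
    return frame_path

def process_frame(frame_path: Path) -> Path:
    return repair_color(stabilize(frame_path))

def process_reel(frame_dir: str, workers: int = 4) -> list[Path]:
    frames = sorted(Path(frame_dir).glob("*.png"))
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_frame, frames))

if __name__ == "__main__":
    process_reel("scanned_frames")
```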

This is the general diagram showing how StyleGAN was modified into HistoGAN with only a few additions.

This is the configuration that will be most useful to my project. It shows the skip-connection system that bypasses the “created image” of the GAN architecture and re-injects the original image into the latent image. This allows the GAN to generate missing image information, such as a faded green channel: the generated image might contain green grass and a blue sky where the original image did not, and in the final output the model knows to replace the missing colors in a way that no current automatic process can. Traditionally this is done with semi frame-by-frame hand coloring by experts who are both technically and artistically skilled, using tools that carry the recolor forward to following frames. This is extremely time-consuming, and very few people are good at both the software and the artistic sense of “not going too far.”
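The sketch below is only an illustration of that re-injection idea, not the actual HistoGAN code: the generated frame supplies the faded color information while the original scan is blended back in, per channel, so detail is preserved. The function name and the per-channel blending weights are assumptions made for this example.

```python
# Illustrative blend: take faded channels from the generated image,
# keep intact channels from the original scan.
import numpy as np

def reinject_original(original: np.ndarray, generated: np.ndarray,
                      channel_strength: np.ndarray) -> np.ndarray:
    """Blend generated color back into the original frame, per channel.

    original, generated: HxWx3 float arrays in [0, 1].
    channel_strength: length-3 array in [0, 1]; 1.0 means the channel is
    fully faded and should come from the generated image (e.g. a lost
    green channel), 0.0 means keep the original channel untouched.
    """
    strength = channel_strength.reshape(1, 1, 3)
    return (1.0 - strength) * original + strength * generated

# Example: a frame whose green channel has faded almost completely.
original = np.random.rand(480, 640, 3).astype(np.float32)
generated = np.random.rand(480, 640, 3).astype(np.float32)
restored = reinject_original(original, generated,
                             np.array([0.1, 0.9, 0.1], dtype=np.float32))
```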

This is the mechanical progress. Although not the focus of the project, getting film images into a computer is expensive, and the cheapest solution is genuinely my own. Progress is great, and other components not shown are already operational, including the camera, a diffuse multispectral light source, and basic capture software with histograms.
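For the histogram readout in the capture software, the sketch below shows the general idea: per-channel histograms of a captured frame so exposure and fading can be judged during scanning. The frame source here is a stand-in; the real rig pulls frames from the camera.

```python
# Minimal sketch: per-channel histograms for a captured 8-bit RGB frame.
import numpy as np
from PIL import Image

def channel_histograms(frame: np.ndarray, bins: int = 256) -> dict[str, np.ndarray]:
    """Return a histogram for each RGB channel of an 8-bit frame."""
    hists = {}
    for i, name in enumerate(("red", "green", "blue")):
        hists[name], _ = np.histogram(frame[..., i], bins=bins, range=(0, 256))
    return hists

if __name__ == "__main__":
    # Stand-in for a frame grabbed from the scanner camera.
    frame = np.asarray(Image.new("RGB", (640, 480), (120, 90, 60)))
    for name, hist in channel_histograms(frame).items():
        print(name, "peak bin:", int(hist.argmax()))
```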