In this article, I will show you how to colorize old black-and-white photographs using the machine learning method Time-Travel Rephotography.
Time-Travel Rephotography
Overview
Many historical figures were only ever captured in old, faded, black-and-white photos that are distorted by the limitations of early cameras and the passage of time.
Time-Travel Rephotography simulates traveling back in time with a modern camera to rephotograph famous subjects. Unlike conventional image restoration filters which apply independent operations like denoising, colorization, and superresolution, we leverage the StyleGAN2 framework to project old photos into the space of modern high-resolution photos, achieving all of these effects in a unified framework.
A unique challenge with this approach is retaining the identity and pose of the subject in the original photo, while discarding the many artifacts frequently seen in low-quality antique photos. Our comparisons to current state-of-the-art restoration filters show significant improvements and compelling results for a variety of important historical people.
Please refer to this paper for details.
In this article, we will use the above method to colorize any photo.
Demo (Colaboratory)
Now, let's actually run the code and colorize a photo.
The source code is included in this article, and it is also available on GitHub below.
GitHub - Colaboratory demo
You can also open it directly in Google Colaboratory from the following.
Setup environment
Let's set up the environment. After opening Colaboratory, set the runtime type to GPU (Runtime → Change runtime type).
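To confirm that the GPU runtime is actually active, you can run a quick sanity check (this cell is not part of the original notebook; torch comes preinstalled in Colaboratory):
# Check that the Colab runtime has a GPU assigned (sanity check only)
!nvidia-smi
import torch
print("CUDA available:", torch.cuda.is_available())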
First, get the source code from Github.
%cd /content
!git clone --depth 1 --recurse-submodules --shallow-submodules \
https://github.com/Time-Travel-Rephotography/Time-Travel-Rephotography.github.io.git Time-Travel-Rephotography
# for face align
!git clone https://github.com/adamian98/pulse.git
Then, install the required libraries.
%cd /content/Time-Travel-Rephotography
!pip3 install -r requirements.txt
!pip3 install --upgrade gdown
Finally, import the libraries.
%cd /content/Time-Travel-Rephotography
from pathlib import Path
import os
from PIL import Image
from IPython.display import display
from argparse import Namespace
from projector import (
    ProjectorArguments,
    main,
)
This completes the environment setup.
Trained model setup
Next, we will download the trained model.
%cd /content/Time-Travel-Rephotography
!mkdir -p ./checkpoint/encoder
if not os.path.exists('checkpoint/e4e_ffhq_encode.pt'):
    !gdown https://drive.google.com/uc?id=1YLB-3ZCv6FRAWwCHkTvUou_CAvDcb5Pr -O checkpoint/e4e_ffhq_encode.pt
if not os.path.exists('checkpoint/stylegan2-ffhq-config-f.pt'):
    !gdown https://drive.google.com/uc?id=1aSnTVHGNzQ-Eh5rNDOrOc_4SjQbNMq3B -O checkpoint/stylegan2-ffhq-config-f.pt
if not os.path.exists('checkpoint/vgg_face_dag.pt'):
    !gdown https://drive.google.com/uc?id=12BHlsSQM0D8KyXprIc7WmJRLWBWdtF56 -O checkpoint/vgg_face_dag.pt
if not os.path.exists('checkpoint/encoder/checkpoint_b.pt'):
    !gdown https://drive.google.com/uc?id=1fXBCyBKNEZfeiI6LEHTEnUg_DjWHJy5r -O checkpoint/encoder/checkpoint_b.pt
if not os.path.exists('checkpoint/encoder/checkpoint_g.pt'):
    !gdown https://drive.google.com/uc?id=1YnQEPf7FyfZxAVQg-CpbvdF4otZeTOZh -O checkpoint/encoder/checkpoint_g.pt
if not os.path.exists('checkpoint/encoder/checkpoint_gb.pt'):
    !gdown https://drive.google.com/uc?id=1kelQK3pUdeHwu8uVE7A4WZH_EjHxGE6v -O checkpoint/encoder/checkpoint_gb.pt
%cd /content/Time-Travel-Rephotography/third_party/face_parsing
!mkdir -p res/cp
if not os.path.exists('res/cp/79999_iter.pth'):
    !gdown https://drive.google.com/uc?id=1vwm4BcAKISQgcJLvTUcvesk73UIYdDMF -O res/cp/79999_iter.pth
There are several models to download, but it should only take a minute or two.
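If you want to confirm that every checkpoint arrived before moving on, a simple existence check like the following helps (the file list simply mirrors the download commands above):
# Verify that each expected checkpoint file exists and is non-empty
import os
expected = [
    "/content/Time-Travel-Rephotography/checkpoint/e4e_ffhq_encode.pt",
    "/content/Time-Travel-Rephotography/checkpoint/stylegan2-ffhq-config-f.pt",
    "/content/Time-Travel-Rephotography/checkpoint/vgg_face_dag.pt",
    "/content/Time-Travel-Rephotography/checkpoint/encoder/checkpoint_b.pt",
    "/content/Time-Travel-Rephotography/checkpoint/encoder/checkpoint_g.pt",
    "/content/Time-Travel-Rephotography/checkpoint/encoder/checkpoint_gb.pt",
    "/content/Time-Travel-Rephotography/third_party/face_parsing/res/cp/79999_iter.pth",
]
for path in expected:
    ok = os.path.exists(path) and os.path.getsize(path) > 0
    print("OK     " if ok else "MISSING", path)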
Test image setup
Next, download any image you want to colorize.
Here, we use wget to download a black-and-white image into the Google Colaboratory environment.
%cd /content/Time-Travel-Rephotography
!mkdir -p test_imgs
!wget https://www.allcinema.net/img/6/6476/p_40178_01_01_02.jpg -O ./test_imgs/test.jpg
Next, align and crop the face region from the image.
%cd /content/pulse/
!python align_face.py \
-input_dir /content/Time-Travel-Rephotography/test_imgs \
-output_dir /content/Time-Travel-Rephotography/test_aligns \
-output_size 512 \
-seed 12
The input image is as follows.

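To check the aligned face crop inside the notebook, you can display it with PIL. The filename test_0.png is an assumption based on pulse's naming convention (input stem plus a face index); adjust it if your aligned file is named differently.
# Display the aligned face crop produced by align_face.py (assumed filename)
from PIL import Image
from IPython.display import display
aligned = Image.open("/content/Time-Travel-Rephotography/test_aligns/test_0.png")
display(aligned)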
Colorization
Now, let's run the colorization.
Better results can be obtained by switching spectral_sensitivity depending on the type of black and white photo.
#@markdown 'b' (blue-sensitive), 'gb' (orthochromatic), 'g' (panchromatic)
spectral_sensitivity = "g" # @param ["b", "gb", "g"]
#@markdown Estimated blur radius of the input photo
gaussian_radius = 0.75 # @param {type:"number"}
%cd /content/Time-Travel-Rephotography

# Path to the aligned face generated in the previous step
input_path = Path("test_aligns/test_0.png")

args = ProjectorArguments().parse(
    args=[str(input_path)],
    namespace=Namespace(
        spectral_sensitivity=spectral_sensitivity,
        encoder_ckpt=f"checkpoint/encoder/checkpoint_{spectral_sensitivity}.pt",
        encoder_name=spectral_sensitivity,
        gaussian=gaussian_radius,
        log_visual_freq=1000,
        log_dir="log/",
        results_dir="results/",
    ),
)
main(args)
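The output filenames under results/ encode the optimization hyperparameters and can be quite long, so if you are unsure of the exact names, it is easiest to list the directory first:
# List everything the projector wrote to find the exact output filenames
!ls -R /content/Time-Travel-Rephotography/results/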
Display the generated images side by side.
def get_concat_h_multi_resize(im_list, resample=Image.BICUBIC):
    min_height = min(im.height for im in im_list)
    im_list_resize = [
        im.resize((int(im.width * min_height / im.height), min_height), resample=resample)
        for im in im_list
    ]
    total_width = sum(im.width for im in im_list_resize)
    dst = Image.new('RGB', (total_width, min_height))
    pos_x = 0
    for im in im_list_resize:
        dst.paste(im, (pos_x, 0))
        pos_x += im.width
    return dst

im1 = Image.open(f"/content/Time-Travel-Rephotography/results/test_0_{spectral_sensitivity}.png")
im2 = Image.open('/content/Time-Travel-Rephotography/results/test/skin_mask/face_parsing/input.jpg')
im3 = Image.open(f"/content/Time-Travel-Rephotography/results/test_0-{spectral_sensitivity}-G{gaussian_radius}-init(10,18)-s256-vgg1-vggface0.3-eye0.1-color1.0e+10-cx0.1(relu3_4,relu2_2,relu1_2)-NR5.0e+04-lr0.1_0.01-c32-wp(250,750).png")

concat_img = get_concat_h_multi_resize([im1, im2, im3])
display(concat_img)
The output result is as follows.
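If you also want to save the comparison image to your local machine, Colab's files helper can be used (optional; this assumes you are running in Google Colaboratory and that concat_img from the previous cell is still in memory):
# Save the side-by-side comparison and download it from Colab
from google.colab import files
concat_img.save("/content/colorized_result.png")
files.download("/content/colorized_result.png")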
Summary
In this article, I introduced how to colorize old black-and-white photographs using the machine learning method Time-Travel Rephotography.
The results can sometimes be a little unstable since the method relies on a GAN, but it produces natural-looking color photographs.
References
1. Paper - Time-Travel Rephotography
2. GitHub - Time-Travel-Rephotography.github.io