
Trying out OpenCV - 38. Tuning the processing, part 2

I've been tuning the digit recognition for the Haru no Pan Matsuri (Spring Bread Festival) sticker points, but it just won't get perfect...

I'll wrap up the tuning with this post and move on to building the actual application.

What changed this time

After a lot of experimenting, I made the following changes.

  • The contour data used as the digit templates is now straightened in advance, based on the angle from cv2.minAreaRect(). (This didn't help much, but it tidied up the processing, so I'll stick with the revised version.)
  • The nearest-neighbor search inside the ICP step was sped up.
  • Revised how the point-digit recognition processing is split into steps.
  • The similarity calculation used cv2.matchTemplate(), but since all I need is to compare two binary images of matching size, it was replaced with processing tailored to that. I haven't checked the performance, but it should be faster. (A rough timing sketch follows below.)
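
As a rough way to check that last point, a micro-benchmark along these lines could be used (just a sketch on synthetic images img_a/img_b; I haven't run it against the real data):

import time
import cv2
import numpy as np

img_a = np.zeros((60, 40), 'uint8')
img_b = np.zeros((60, 40), 'uint8')
cv2.circle(img_a, (20, 30), 15, 255, -1)
cv2.circle(img_b, (22, 28), 15, 255, -1)

t0 = time.perf_counter()
for _ in range(1000):
    val_mt = cv2.matchTemplate(img_a, img_b, cv2.TM_CCORR_NORMED)[0, 0]
t1 = time.perf_counter()
for _ in range(1000):
    # XOR leaves only the mismatching pixels; 1.0 means a perfect match
    val_xor = 1.0 - np.count_nonzero(cv2.bitwise_xor(img_a, img_b)) / img_a.size
t2 = time.perf_counter()
print('matchTemplate: {:.3f}s  XOR: {:.3f}s'.format(t1 - t0, t2 - t1))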

Processing built so far

Since things kept changing along the way, here are the current versions, in order.

Library imports and image loading

import cv2
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import math
import copy
import random

img1 = cv2.imread('harupan_190428_1.jpg')
img2 = cv2.imread('harupan_190428_2.jpg')
img3 = cv2.imread('harupan_200317_1.jpg')
img4 = cv2.imread('harupan_210227_2.jpg')
img5 = cv2.imread('harupan_210402_1.jpg')
img6 = cv2.imread('harupan_210402_2.jpg')
img7 = cv2.imread('harupan_210414_1.jpg')

Detecting candidate point-digit contours

def detect_candidate_contours(image, res_th=800):
    h, w, chs = image.shape
    if h > res_th or w > res_th:
        k = float(res_th)/h if w > h else float(res_th)/w
    else:
        k = 1.0
    img = cv2.resize(image, None, fx=k, fy=k, interpolation=cv2.INTER_AREA)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Convert hue value (rotation, mask by saturation)
    hsv[:,:,0] = np.where(hsv[:,:,0] < 50, hsv[:,:,0]+180, hsv[:,:,0])
    hsv[:,:,0] = np.where(hsv[:,:,1] < 100, 0, hsv[:,:,0])
    # Thresholding with cv2.inRange()
    th_hue = cv2.inRange(hsv[:,:,0], 135, 190)
    # Retrieve all points on the contours (cv2.CHAIN_APPROX_NONE)
    contours, hierarchy = cv2.findContours(th_hue, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    indices0 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] == -1]
    indices1 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] in indices0]
    contours1 = [contours[i] for i in indices1]
    contours1_filtered = [ctr for ctr in contours1 if cv2.contourArea(ctr) > float(res_th)*float(res_th)/4000]
    return contours1_filtered, img
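
A minimal usage sketch (using img1 loaded above): run the detector on one image and draw the surviving contours as a quick visual check.

ctrs, img = detect_candidate_contours(img1)
debug_img = cv2.drawContours(img.copy(), ctrs, -1, (0,255,0), 2)
plt.imshow(cv2.cvtColor(debug_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()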

Auxiliary functions

  • Small image around a contour
    Cuts out the image region around the contour, and shifts the origin of both the image and the contour itself to that region's origin.
    One purpose is creating the data to compare against; the other is debugging.
  • Filled contour image
    Used for creating template data and comparison data.
def create_contour_area_image(img, ctr):
    x,y,w,h = cv2.boundingRect(ctr)
    rtn_img = img[y:y+h,x:x+w,:].copy()
    rtn_ctr = ctr.copy()
    origin = np.array([x,y])
    for c in rtn_ctr:
        c[0,:] -= origin
    return rtn_img, rtn_ctr

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# img_shape: Optional, tuple of (image_height, image_width), if omitted, calculated from ctr
def create_solid_contour(ctr, img_shape=(int(0),int(0))):
    if img_shape == (int(0),int(0)):
        _,_,w,h = cv2.boundingRect(ctr)
    else:
        h,w = img_shape
    img = np.zeros((h,w), 'uint8')
    img = cv2.drawContours(img, [ctr], -1, 255, -1)
    return img

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# Note: ctr is rotated and shifted in place; pass a copy if the original is still needed
def create_upright_solid_contour(ctr):
    (cx,cy),(w,h),angle = cv2.minAreaRect(ctr)
    M = cv2.getRotationMatrix2D((cx,cy), angle, 1)
    for i in range(ctr.shape[0]):
        ctr[i,0,:] = ( M @ np.array([ctr[i,0,0], ctr[i,0,1], 1]) ).astype('int')
    rect = cv2.boundingRect(ctr)
    img = np.zeros((rect[3],rect[2]), 'uint8')
    ctr -= rect[0:2]
    M[:,2] -= rect[0:2]
    img = cv2.drawContours(img, [ctr], -1, 255,-1)
    return img, M, ctr
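
A short sketch of how these helpers fit together (assuming ctrs and img from the detection sketch above). Since create_upright_solid_contour() rotates the contour in place, a copy is passed.

subimg, subctr = create_contour_area_image(img, ctrs[0])
solid = create_solid_contour(subctr)
upright, M, upright_ctr = create_upright_solid_contour(subctr.copy())
plt.subplot(1,2,1), plt.imshow(solid, cmap='gray'), plt.xticks([]), plt.yticks([])
plt.subplot(1,2,2), plt.imshow(upright, cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()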

Organizing the point-digit recognition processing

Datasets

Data generated per contour that gets used several times during processing.
Bundled together in classes.

class contour_dataset:
    def __init__(self, ctr):
        self.ctr = ctr.copy()
        self.rrect = cv2.minAreaRect(ctr)
        self.box = cv2.boxPoints(self.rrect)
        self.solid = create_solid_contour(ctr)
        self.pts = np.array([p for p in ctr[:,0,:]])

class template_dataset:
    def __init__(self, ctr, num, selected_idx=[0]):
        self.ctr = ctr.copy()
        self.num = num
        self.rrect = cv2.minAreaRect(ctr)
        self.box = cv2.boxPoints(self.rrect)
        if num == 0:
            self.solid,_,_ = create_upright_solid_contour(ctr)
        else:
            self.solid = create_solid_contour(ctr)
        self.pts = np.array([ctr[idx,0,:] for idx in selected_idx])
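
For illustration, wrapping one detected contour as a comparison target, and (hypothetically) as a "2" template with every 5th point selected, mirroring what the data preparation below does:

target = contour_dataset(subctr)
selected = [j for j in range(subctr.shape[0]) if j % 5 == 0]
template = template_dataset(subctr.copy(), 2, selected)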

ICP processing

Helper function and main routine.

  • The nearest-neighbor search is the sped-up version.
    It used to compute the distance between two points, but that involves a square root, which makes the computation heavy.
    The squared distance is cheap to compute, and for a pure comparison the squared value works just as well, so I changed it accordingly. (A vectorized variant is also sketched after the code.)
# pts: list of 2D points, or ndarray of shape (n,2)
# query: 2D point to find nearest neighbor
def find_nearest_neighbor(pts, query):
    min_distance_sq = float('inf')
    min_idx = 0
    for i, p in enumerate(pts):
        d = np.dot(query - p, query - p)
        if d < min_distance_sq:
            min_distance_sq = d
            min_idx = i
    return min_idx, np.sqrt(min_distance_sq)
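
# For reference, the same search can be vectorized with NumPy, avoiding the
# Python-level loop entirely (a sketch, not what is used below; pts must be
# an ndarray of shape (n,2)):
def find_nearest_neighbor_np(pts, query):
    d_sq = np.sum((pts - query)**2, axis=1)  # squared distances to all points at once
    min_idx = int(np.argmin(d_sq))
    return min_idx, np.sqrt(d_sq[min_idx])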

# src, dst: ndarray, shape is (n,2) (n: number of points)
def estimate_affine_2d(src, dst):
    n = min(src.shape[0], dst.shape[0])
    x = dst[0:n].flatten()
    A = np.zeros((2*n,6))
    for i in range(n):
        A[i*2,0] = src[i,0]
        A[i*2,1] = src[i,1]
        A[i*2,2] = 1
        A[i*2+1,3] = src[i,0]
        A[i*2+1,4] = src[i,1]
        A[i*2+1,5] = 1
    M = np.linalg.inv(A.T @ A) @ A.T @ x
    return M.reshape([2,3])
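
# Note: np.linalg.inv(A.T @ A) fails if the source points are degenerate
# (e.g. all collinear). np.linalg.lstsq() solves the same least-squares
# problem more robustly, so a drop-in alternative could look like this:
def estimate_affine_2d_lstsq(src, dst):
    n = min(src.shape[0], dst.shape[0])
    x = dst[0:n].flatten()
    A = np.zeros((2*n,6))
    for i in range(n):
        A[i*2,0:3] = [src[i,0], src[i,1], 1]
        A[i*2+1,3:6] = [src[i,0], src[i,1], 1]
    m, *_ = np.linalg.lstsq(A, x, rcond=None)
    return m.reshape([2,3])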

# Find optimum affine matrix using ICP algorithm
# src_pts: ndarray, shape is (n_s,2) (n_s: number of points)
# dst_pts: ndarray, shape is (n_d,2) (n_d: number of points, n_d should be larger or equal to n_s)
# initial_matrix: ndarray, shape is (2,3)
def icp(src_pts, dst_pts, max_iter=20, initial_matrix=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])):
    default_affine_matrix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    if dst_pts.shape[0] < src_pts.shape[0]:
        print("icp: Insufficient destination points")
        return default_affine_matrix, False
    if initial_matrix.shape != (2,3):
        print("icp: Illegal shape of initial_matrix")
        return default_affine_matrix, False
    M = initial_matrix
    # Store indices of the nearest neighbor point of dst_pts to the converted point of src_pts
    nn_idx = []
    for i in range(max_iter):
        nn_idx_tmp = []
        dst_pts_list = [p for p in dst_pts]
        idx_list = list(range(0,dst_pts.shape[0]))
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts_list, p2)
            nn_idx_tmp += [idx_list[idx]]
            del dst_pts_list[idx]
            del idx_list[idx]
        if nn_idx != [] and nn_idx == nn_idx_tmp:
            break
        dst_pts2 = np.zeros_like(src_pts)
        for j,idx in enumerate(nn_idx_tmp):
            dst_pts2[j,:] = dst_pts[idx,:]
        M = estimate_affine_2d(src_pts, dst_pts2)
        nn_idx = nn_idx_tmp
        if i == max_iter -1:
            return M, False
    return M, True
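
To see that icp() behaves as intended, here is a self-contained check on synthetic data (a sketch, not part of the pipeline): a five-point shape, a translated copy with two outlier points appended, and ICP recovering the translation.

src = np.array([[0,0], [10,0], [10,20], [0,20], [5,10]], dtype=float)
dst = np.vstack([src + [3,4], [[50,50], [60,60]]])  # translated copy + 2 outliers
M, converged = icp(src, dst)
print(converged)  # True
print(M)          # close to [[1, 0, 3], [0, 1, 4]]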

Similarity calculation

Almost the same as what I used before, apart from a small bug fix (the image returned for debugging was wrong).

def binary_image_similarity(img1, img2):
    if img1.shape != img2.shape:
        print('binary_image_similarity: Different image size')
        return 0.0
    xor_img = cv2.bitwise_xor(img1, img2)
    return 1.0 - float(np.count_nonzero(xor_img)) / (img1.shape[0]*img1.shape[1])

# src, dst: contour_dataset or template_dataset (holding member variables box, solid)
def get_transform_by_rotated_rectangle(src, dst):
    # Rotated patterns are created when starting index is slided
    dst_box2 = np.vstack([dst.box, dst.box])
    max_similarity = 0.0
    max_converted_img = np.zeros(dst.solid.shape, 'uint8')
    M_rtn = np.float32([[1,0,0],[0,1,0]])  # fallback so M_rtn is always defined
    for i in range(4):
        M = cv2.getAffineTransform(src.box[0:3], dst_box2[i:i+3])
        converted_img = cv2.warpAffine(src.solid, M, dsize=(dst.solid.shape[1], dst.solid.shape[0]), flags=cv2.INTER_NEAREST)
        similarity = binary_image_similarity(converted_img, dst.solid)
        if similarity > max_similarity:
            M_rtn = M
            max_similarity = similarity
            max_converted_img = converted_img
    return M_rtn, max_similarity, max_converted_img

def get_similarity_with_template(target_data, template_data, sim_th_high=0.95, sim_th_low=0.7):
    _,(w1,h1), _ = target_data.rrect
    _,(w2,h2), _ = template_data.rrect
    r = w1/h1 if w1 < h1 else h1/w1
    r = r * h2/w2 if w2 < h2 else r * w2/h2
    M, sim_init, _ = get_transform_by_rotated_rectangle(template_data, target_data)
    if sim_init > sim_th_high or sim_init < sim_th_low or r > 1.4 or r < 0.7:
        dsize = (template_data.solid.shape[1], template_data.solid.shape[0])
        flags = cv2.INTER_NEAREST|cv2.WARP_INVERSE_MAP
        converted_img = cv2.warpAffine(target_data.solid, M, dsize=dsize, flags=flags)
        return sim_init, converted_img
    M, _ = icp(template_data.pts, target_data.pts, initial_matrix=M)
    Minv = cv2.invertAffineTransform(M)
    converted_ctr = np.zeros_like(target_data.ctr)
    for i in range(target_data.ctr.shape[0]):
        converted_ctr[i,0,:] = (Minv[:,0:2] @ target_data.ctr[i,0,:]) + Minv[:,2]
    converted_img = create_solid_contour(converted_ctr, img_shape=template_data.solid.shape)
    val = binary_image_similarity(converted_img, template_data.solid)
    return val, converted_img

def get_similarity_with_template_zero(target_data, template_data):
    dsize = (template_data.solid.shape[1], template_data.solid.shape[0])
    converted_img = cv2.resize(target_data.solid, dsize=dsize, interpolation=cv2.INTER_NEAREST)
    val = binary_image_similarity(converted_img, template_data.solid)
    return val, converted_img

def get_similarities(target, templates):
    similarities = []
    converted_imgs = []
    for tmpl in templates:
        if tmpl.num == 0:
            sim,converted_img = get_similarity_with_template_zero(target, tmpl)
        else:
            sim,converted_img = get_similarity_with_template(target, tmpl)
        similarities += [sim]
        converted_imgs += [converted_img]
    return similarities, converted_imgs

# target: Single contour to compare
# templates: List of template_dataset (for numbers 0, 1, 2, 3, 5)
# svm: Trained SVM
# return: determined number (0,1,2,3,5), -1 if none corresponds
def determine_number(target, templates, svm):
    similarities,_ = get_similarities(target, templates)
    _, result = svm.predict(np.array([similarities], 'float32'))
    return int(result[0,0])

SVM training helpers

For sampling the training data, the random seed can now be specified, so the same situation can be reproduced. (A quick check follows the code.)

def get_random_sample(data_in, labels_in, selected_labels, n_samples, seed=None):
    random.seed(seed)
    data_rtn = []
    labels_rtn = []
    for lab in selected_labels:
        samples = [d for i,d in enumerate(data_in) if labels_in[i]==lab]
        n = min(n_samples, len(samples))
        data_rtn += random.sample(samples, n)
        labels_rtn += [lab] * n
    return data_rtn, labels_rtn
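
# Quick sanity check of the reproducible sampling (toy data): with a fixed
# seed, the same samples come back on every run.
data = [[0.1], [0.2], [0.3], [0.4], [0.5], [0.6]]
labels = [0, 0, 0, 1, 1, 1]
d, l = get_random_sample(data, labels, [0, 1], 2, seed=1)
print(d, l)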

def prepare_svm(train_data, train_labels):
    svm = cv2.ml.SVM_create()
    svm.setKernel(cv2.ml.SVM_LINEAR)
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setC(100)
    svm.setGamma(1)
    svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels))
    return svm

def print_stat(svm_results, svm_labels):
    stats = {k:{k2:0 for k2 in [-1, 0, 1, 2, 3, 5]} for k in [-1, 0, 1, 2, 3, 5]}
    for res, lab in zip(svm_results[1], svm_labels):
        stats[lab][int(res[0])] += 1
    for k,v in stats.items():
        print('label {:>2}'.format(k), ': {', end='')
        for k2,v2 in v.items():
            print('{}: {:>2}, '.format(k2,v2), end='')
        print('}')

def print_similarity_vector(sim, end=''):
    print('[',end='')
    for s in sim: print('{:.3f}, '.format(s), end='')
    print(']', end=end)

Preparing the data

From here on, the necessary data is extracted from the actual images.
Some of it is saved for reuse later.

Getting the contour data

original_imgs = [img1, img2, img3, img4, img5, img6, img7]
resized_imgs = []
resized_ctrs = []
original_img_idx = []
subctrs_all = []
subimgs_all = []
for idx, original_img in enumerate(original_imgs):
    ctrs, img = detect_candidate_contours(original_img)
    resized_imgs += [img]
    resized_ctrs += [ctrs]
    
    for ctr in ctrs:
        original_img_idx += [idx]
        subimg,subctr = create_contour_area_image(img, ctr)
        subctrs_all += [subctr]
        subimgs_all += [subimg]

Preparing the template data

As before, the 3rd image (2020) and the 5th image (2021) have no "3 points" sticker, so the data from the 1st image (2019) stands in for it.

ctrs1_idx_zero = 26
ctrs1_idx_one = 27
ctrs1_idx_two = 24
ctrs1_idx_three = 33
ctrs1_idx_five = 8
ctrs1_idx_numbers = [ctrs1_idx_zero, ctrs1_idx_one, ctrs1_idx_two, ctrs1_idx_three, ctrs1_idx_five]

subimgs1 = []
subctrs1 = []
binimgs1 = []
subctrs1_selected_pts = []
for i,idx in enumerate(ctrs1_idx_numbers):
    img, ctr = create_contour_area_image(resized_imgs[0], resized_ctrs[0][idx])
    binimg, M, ctr2 = create_upright_solid_contour(ctr)
    img2 = cv2.warpAffine(img.copy(), M, (binimg.shape[1], binimg.shape[0]))
    subimgs1 += [img2]
    subctrs1 += [ctr2]
    binimgs1 += [binimg]
    ctr_selected_pts = [j for j in range(ctr2.shape[0]) if j % 5 == 0]
    if i != 0:
        subctrs1_selected_pts += [ctr_selected_pts]
    ctr_img = cv2.drawContours(img2.copy(), [ctr2], -1, (0,255,0), 2)
    pts_img = img2.copy()
    for p in ctr_selected_pts:
        pts_img = cv2.drawMarker(pts_img, ctr2[p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(3,5,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(3,5,6+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
    plt.subplot(3,5,11+i), plt.imshow(cv2.cvtColor(pts_img, cv2.COLOR_BGR2RGB), cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220327233712p:plain

ctrs3_idx_zero = 7
ctrs3_idx_one = 4
ctrs3_idx_two = 17
ctrs3_idx_five = 6
ctrs3_idx_numbers = [ctrs3_idx_zero, ctrs3_idx_one, ctrs3_idx_two, ctrs3_idx_five]

subimgs3 = []
subctrs3 = []
binimgs3 = []
subctrs3_selected_pts = []
for i,idx in enumerate(ctrs3_idx_numbers):
    img, ctr = create_contour_area_image(resized_imgs[2], resized_ctrs[2][idx])
    binimg, M, ctr2 = create_upright_solid_contour(ctr)
    img2 = cv2.warpAffine(img.copy(), M, (binimg.shape[1], binimg.shape[0]))
    subimgs3 += [img2]
    subctrs3 += [ctr2]
    binimgs3 += [binimg]
    ctr_selected_pts = [j for j in range(ctr2.shape[0]) if j % 5 == 0]
    if i != 0:
        subctrs3_selected_pts += [ctr_selected_pts]
    ctr_img = cv2.drawContours(img2.copy(), [ctr2], -1, (0,255,0), 2)
    pts_img = img2.copy()
    for p in ctr_selected_pts:
        pts_img = cv2.drawMarker(pts_img, ctr2[p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(3,4,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(3,4,5+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
    plt.subplot(3,4,9+i), plt.imshow(cv2.cvtColor(pts_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

subimgs3.insert(3, subimgs1[3])
subctrs3.insert(3, subctrs1[3])
binimgs3.insert(3, binimgs1[3])
subctrs3_selected_pts.insert(2, subctrs1_selected_pts[2])

f:id:nokixa:20220327233715p:plain

ctrs5_idx_zero = 3
ctrs5_idx_one = 4
ctrs5_idx_two = 2
ctrs5_idx_five = 5
ctrs5_idx_numbers = [ctrs5_idx_zero, ctrs5_idx_one, ctrs5_idx_two, ctrs5_idx_five]

subimgs5 = []
subctrs5 = []
binimgs5 = []
subctrs5_selected_pts = []
for i,idx in enumerate(ctrs5_idx_numbers):
    img, ctr = create_contour_area_image(resized_imgs[4], resized_ctrs[4][idx])
    binimg, M, ctr2 = create_upright_solid_contour(ctr)
    img2 = cv2.warpAffine(img.copy(), M, (binimg.shape[1], binimg.shape[0]))
    subimgs5 += [img2]
    subctrs5 += [ctr2]
    binimgs5 += [binimg]
    ctr_selected_pts = [j for j in range(ctr2.shape[0]) if j % 5 == 0]
    if i != 0:
        subctrs5_selected_pts += [ctr_selected_pts]
    ctr_img = cv2.drawContours(img2.copy(), [ctr2], -1, (0,255,0), 2)
    pts_img = img2.copy()
    for p in ctr_selected_pts:
        pts_img = cv2.drawMarker(pts_img, ctr2[p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(3,4,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(3,4,5+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
    plt.subplot(3,4,9+i), plt.imshow(cv2.cvtColor(pts_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

subimgs5.insert(3, subimgs1[3])
subctrs5.insert(3, subctrs1[3])
binimgs5.insert(3, binimgs1[3])
subctrs5_selected_pts.insert(2, subctrs1_selected_pts[2])

f:id:nokixa:20220327233718p:plain

Ground-truth labels

labels1 = [-1,-1,-1,-1,-1
           ,-1,5,0,5,1
           ,5,0,2,1,2
           ,-1,-1,1,1,5
           ,0,2,5,0,2
           ,5,0,1,2,-1
           ,5,1,2,3,1
           ,5,0,-1]

labels2 = [-1,-1,-1,-1,-1
           ,-1,5,0,5,1
           ,5,0,2,1,2
           ,-1,-1,1,1,5
           ,0,-1,2,5,0
           ,2,5,0,1,2
           ,5,0,1,-1,2
           ,-1,-1,-1,3,-1
           ,5,0,-1,1,-1
           ,-1]

labels3 = [-1,-1,-1,-1,1
           ,1,5,0,1,1
           ,5,0,5,0,-1
           ,-1,-1,2,-1,-1
           ,-1,1,1,1,-1
           ,1,-1,-1,1,1
           ,-1,2,-1,1,-1
           ,1,2,-1,1,-1
           ,-1,2,5,-1,0
           ,-1,1,1]

labels4 = [-1,-1,-1,-1,-1
           ,-1,-1,-1,-1,-1
           ,-1,-1,-1,-1,-1
           ,-1,-1,-1,1,1
           ,1,1,1,1,1
           ,-1,5,0,2,5
           ,0,2,1,2,2
           ,-1,-1,-1,1,1
           ,1]

labels5 = [-1,-1,2,0,1
           ,5,-1,1,1,1
           ,1,1,1,1,1
           ,1,-1,5,1,0
           ,5,1,2,0,5
           ,0,2,1,2,2
           ,-1,-1,1,1,1
           ]

labels6 = [-1,0,1,5,2
            ,-1,1,1,1,1
            ,5,1,0,5,0
            ,2,1,5,0,2
            ,2,2,1,-1,-1
            ,1,1,1,1,1
            ,1,1,1]

labels7 = [-1,-1,-1,-1,-1
           ,-1,1,2,2,2
           ,2,1,2,2,2
           ,1,-1,-1,-1,2
           ,1,2,1,1]

labels_all = labels1 + labels2 + labels3 + labels4 + labels5 + labels6 + labels7

Building the datasets

# Prepare template data for "0"
templates1 = [template_dataset(subctrs1[0], 0)]
templates3 = [template_dataset(subctrs3[0], 0)]
templates5 = [template_dataset(subctrs5[0], 0)]
# Prepare template data for other numbers
numbers = [1, 2, 3, 5]
for i,num in enumerate(numbers):
    templates1 += [template_dataset(subctrs1[i+1], num, subctrs1_selected_pts[i])]
    templates3 += [template_dataset(subctrs3[i+1], num, subctrs3_selected_pts[i])]
    templates5 += [template_dataset(subctrs5[i+1], num, subctrs5_selected_pts[i])]
ctr_datasets_all = [contour_dataset(ctr) for ctr in subctrs_all]

Running the similarity calculation

templates_sel = [1,1,3,5,5,5,5]
def select_template(i):
    img_idx = original_img_idx[i]
    if templates_sel[img_idx] == 1:
        return templates1
    elif templates_sel[img_idx] == 3:
        return templates3
    elif templates_sel[img_idx] == 5:
        return templates5
    else:
        return templates1

similarities_all = []
converted_imgs_all = []
print('  Contour No. ', end='')
for i,target_ctr in enumerate(ctr_datasets_all):
    templates = select_template(i)
    print(i, ' ', end='')
    sims, imgs = get_similarities(target_ctr, templates)
    similarities_all += [sims]
    converted_imgs_all += [imgs]
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40  41  42  43  44  45  46  47  48  49  50  51  52  53  54  55  56  57  58  59  60  61  62  63  64  65  66  67  68  69  70  71  72  73  74  75  76  77  78  79  80  81  82  83  84  85  86  87  88  89  90  91  92  93  94  95  96  97  98  99  100  101  102  103  104  105  106  107  108  109  110  111  112  113  114  115  116  117  118  119  120  121  122  123  124  125  126  127  128  129  130  131  132  133  134  135  136  137  138  139  140  141  142  143  144  145  146  147  148  149  150  151  152  153  154  155  156  157  158  159  160  161  162  163  164  165  166  167  168  169  170  171  172  173  174  175  176  177  178  179  180  181  182  183  184  185  186  187  188  189  190  191  192  193  194  195  196  197  198  199  200  201  202  203  204  205  206  207  208  209  210  211  212  213  214  215  216  217  218  219  220  221  222  223  224  225  226  227  228  229  230  231  232  233  234  235  236  237  238  239  240  241  242  243  244  245  246  247  248  249  250  251  252  253  254  255  256  257  258  259  260  261  262  263  264  

All the similarity vectors obtained are printed below.
It gets long, though...

for sim, lab in zip(similarities_all, labels_all):
    print('label {:>2}'.format(lab), ': ', end='')
    print_similarity_vector(sim, end='\n')
label -1 : [0.825, 0.879, 0.666, 0.649, 0.710, ]
label -1 : [0.820, 0.783, 0.658, 0.667, 0.699, ]
label -1 : [0.841, 0.757, 0.684, 0.676, 0.733, ]
label -1 : [0.816, 0.730, 0.676, 0.699, 0.731, ]
label -1 : [0.844, 0.765, 0.691, 0.708, 0.738, ]
label -1 : [0.860, 0.746, 0.697, 0.681, 0.736, ]
label  5 : [0.717, 0.816, 0.685, 0.812, 0.919, ]
label  0 : [0.868, 0.836, 0.608, 0.679, 0.731, ]
label  5 : [0.746, 0.748, 0.672, 0.792, 0.950, ]
label  1 : [0.729, 0.942, 0.766, 0.743, 0.731, ]
label  5 : [0.731, 0.774, 0.668, 0.781, 0.921, ]
label  0 : [0.942, 0.762, 0.695, 0.669, 0.728, ]
label  2 : [0.660, 0.753, 0.937, 0.754, 0.724, ]
label  1 : [0.744, 0.947, 0.710, 0.702, 0.686, ]
label  2 : [0.650, 0.777, 0.957, 0.754, 0.702, ]
label -1 : [0.682, 0.747, 0.643, 0.630, 0.647, ]
label -1 : [0.796, 0.672, 0.715, 0.783, 0.878, ]
label  1 : [0.709, 0.952, 0.801, 0.784, 0.755, ]
label  1 : [0.625, 0.955, 0.834, 0.807, 0.805, ]
label  5 : [0.732, 0.759, 0.663, 0.786, 0.918, ]
label  0 : [0.967, 0.722, 0.670, 0.659, 0.732, ]
label  2 : [0.665, 0.715, 0.945, 0.751, 0.682, ]
label  5 : [0.719, 0.680, 0.691, 0.801, 0.898, ]
label  0 : [0.963, 0.722, 0.678, 0.664, 0.728, ]
label  2 : [0.670, 0.732, 0.963, 0.754, 0.697, ]
label  5 : [0.750, 0.693, 0.661, 0.674, 0.913, ]
label  0 : [0.895, 0.734, 0.684, 0.629, 0.752, ]
label  1 : [0.744, 0.954, 0.740, 0.708, 0.706, ]
label  2 : [0.681, 0.720, 0.942, 0.750, 0.692, ]
label -1 : [0.820, 0.738, 0.691, 0.713, 0.750, ]
label  5 : [0.671, 0.761, 0.653, 0.610, 0.784, ]
label  1 : [0.699, 0.953, 0.711, 0.688, 0.718, ]
label  2 : [0.621, 0.812, 0.958, 0.745, 0.736, ]
label  3 : [0.701, 0.664, 0.773, 1.000, 0.669, ]
label  1 : [0.542, 0.959, 0.837, 0.809, 0.818, ]
label  5 : [0.737, 0.728, 0.704, 0.782, 0.924, ]
label  0 : [0.865, 0.806, 0.655, 0.669, 0.721, ]
label -1 : [0.690, 0.781, 0.546, 0.677, 0.695, ]
label -1 : [0.840, 0.879, 0.677, 0.662, 0.765, ]
label -1 : [0.827, 0.787, 0.696, 0.688, 0.737, ]
label -1 : [0.844, 0.770, 0.645, 0.645, 0.705, ]
label -1 : [0.829, 0.750, 0.676, 0.751, 0.701, ]
label -1 : [0.862, 0.735, 0.684, 0.691, 0.728, ]
label -1 : [0.855, 0.758, 0.664, 0.664, 0.732, ]
label  5 : [0.709, 0.806, 0.687, 0.792, 0.901, ]
label  0 : [0.879, 0.821, 0.640, 0.667, 0.724, ]
label  5 : [0.734, 0.762, 0.677, 0.706, 0.928, ]
label  1 : [0.728, 0.941, 0.771, 0.757, 0.753, ]
label  5 : [0.726, 0.777, 0.675, 0.730, 0.931, ]
label  0 : [0.932, 0.782, 0.628, 0.660, 0.724, ]
label  2 : [0.657, 0.751, 0.946, 0.752, 0.731, ]
label  1 : [0.748, 0.936, 0.737, 0.708, 0.705, ]
label  2 : [0.636, 0.782, 0.956, 0.758, 0.714, ]
label -1 : [0.780, 0.648, 0.735, 0.787, 0.825, ]
label -1 : [0.687, 0.727, 0.642, 0.646, 0.660, ]
label  1 : [0.708, 0.943, 0.796, 0.778, 0.749, ]
label  1 : [0.618, 0.952, 0.836, 0.811, 0.812, ]
label  5 : [0.734, 0.775, 0.682, 0.785, 0.920, ]
label  0 : [0.962, 0.708, 0.666, 0.664, 0.727, ]
label -1 : [0.778, 0.783, 0.687, 0.701, 0.737, ]
label  2 : [0.663, 0.722, 0.943, 0.748, 0.681, ]
label  5 : [0.739, 0.666, 0.692, 0.812, 0.904, ]
label  0 : [0.962, 0.704, 0.672, 0.665, 0.724, ]
label  2 : [0.668, 0.736, 0.951, 0.746, 0.700, ]
label  5 : [0.750, 0.693, 0.653, 0.675, 0.922, ]
label  0 : [0.905, 0.707, 0.662, 0.630, 0.752, ]
label  1 : [0.745, 0.940, 0.738, 0.712, 0.701, ]
label  2 : [0.685, 0.718, 0.946, 0.755, 0.680, ]
label  5 : [0.750, 0.675, 0.710, 0.794, 0.915, ]
label  0 : [0.827, 0.785, 0.669, 0.627, 0.743, ]
label  1 : [0.695, 0.945, 0.717, 0.692, 0.705, ]
label -1 : [0.775, 0.859, 0.708, 0.691, 0.725, ]
label  2 : [0.610, 0.814, 0.959, 0.748, 0.742, ]
label -1 : [0.798, 0.863, 0.684, 0.697, 0.788, ]
label -1 : [0.805, 0.851, 0.696, 0.675, 0.733, ]
label -1 : [0.786, 0.765, 0.675, 0.659, 0.686, ]
label  3 : [0.708, 0.671, 0.767, 0.969, 0.668, ]
label -1 : [0.942, 0.719, 0.693, 0.695, 0.736, ]
label  5 : [0.750, 0.737, 0.693, 0.781, 0.897, ]
label  0 : [0.845, 0.789, 0.660, 0.674, 0.745, ]
label -1 : [0.802, 0.805, 0.702, 0.701, 0.749, ]
label  1 : [0.533, 0.944, 0.835, 0.813, 0.818, ]
label -1 : [0.796, 0.821, 0.687, 0.696, 0.698, ]
label -1 : [0.776, 0.824, 0.753, 0.714, 0.738, ]
label -1 : [0.824, 0.699, 0.615, 0.639, 0.693, ]
label -1 : [0.694, 0.682, 0.633, 0.664, 0.656, ]
label -1 : [0.868, 0.735, 0.673, 0.691, 0.761, ]
label -1 : [0.825, 0.735, 0.673, 0.680, 0.761, ]
label  1 : [0.691, 0.954, 0.762, 0.784, 0.729, ]
label  1 : [0.668, 0.947, 0.743, 0.743, 0.731, ]
label  5 : [0.765, 0.711, 0.683, 0.706, 0.938, ]
label  0 : [1.000, 0.710, 0.650, 0.628, 0.756, ]
label  1 : [0.646, 0.947, 0.714, 0.707, 0.692, ]
label  1 : [0.666, 0.945, 0.743, 0.732, 0.731, ]
label  5 : [0.679, 0.768, 0.687, 0.752, 0.929, ]
label  0 : [0.843, 0.753, 0.688, 0.635, 0.697, ]
label  5 : [0.759, 0.717, 0.690, 0.681, 0.934, ]
label  0 : [0.956, 0.690, 0.633, 0.625, 0.690, ]
label -1 : [0.825, 0.811, 0.735, 0.761, 0.692, ]
label -1 : [0.811, 0.700, 0.636, 0.668, 0.719, ]
label -1 : [0.793, 0.667, 0.632, 0.641, 0.730, ]
label  2 : [0.650, 0.785, 0.979, 0.762, 0.705, ]
label -1 : [0.784, 0.751, 0.686, 0.691, 0.701, ]
label -1 : [0.847, 0.729, 0.668, 0.679, 0.733, ]
label -1 : [0.808, 0.697, 0.650, 0.660, 0.719, ]
label  1 : [0.741, 0.940, 0.738, 0.715, 0.674, ]
label  1 : [0.671, 0.944, 0.730, 0.717, 0.691, ]
label  1 : [0.670, 0.941, 0.730, 0.721, 0.698, ]
label -1 : [0.797, 0.747, 0.689, 0.692, 0.734, ]
label  1 : [0.685, 0.945, 0.718, 0.695, 0.685, ]
label -1 : [0.822, 0.713, 0.644, 0.692, 0.706, ]
label -1 : [0.814, 0.667, 0.624, 0.650, 0.726, ]
label  1 : [0.706, 0.952, 0.716, 0.713, 0.690, ]
label  1 : [0.639, 0.954, 0.687, 0.689, 0.674, ]
label -1 : [0.798, 0.671, 0.620, 0.654, 0.749, ]
label  2 : [0.662, 0.711, 0.951, 0.753, 0.651, ]
label -1 : [0.811, 0.718, 0.688, 0.723, 0.721, ]
label  1 : [0.639, 0.960, 0.683, 0.690, 0.674, ]
label -1 : [0.802, 0.692, 0.615, 0.658, 0.731, ]
label  1 : [0.702, 0.948, 0.726, 0.715, 0.678, ]
label  2 : [0.601, 0.755, 0.944, 0.754, 0.676, ]
label -1 : [0.819, 0.714, 0.647, 0.730, 0.719, ]
label  1 : [0.677, 0.941, 0.718, 0.706, 0.692, ]
label -1 : [0.823, 0.684, 0.640, 0.672, 0.744, ]
label -1 : [0.648, 0.847, 0.697, 0.666, 0.681, ]
label  2 : [0.592, 0.757, 0.965, 0.757, 0.697, ]
label  5 : [0.753, 0.692, 0.683, 0.684, 0.925, ]
label -1 : [0.648, 0.762, 0.662, 0.664, 0.762, ]
label  0 : [0.956, 0.696, 0.640, 0.629, 0.703, ]
label -1 : [0.811, 0.688, 0.637, 0.671, 0.753, ]
label  1 : [0.678, 0.944, 0.702, 0.689, 0.690, ]
label  1 : [0.656, 0.949, 0.684, 0.703, 0.679, ]
label -1 : [0.561, 0.841, 0.807, 0.802, 0.845, ]
label -1 : [0.791, 0.805, 0.721, 0.730, 0.730, ]
label -1 : [0.846, 0.798, 0.613, 0.641, 0.669, ]
label -1 : [0.797, 0.786, 0.683, 0.681, 0.721, ]
label -1 : [0.849, 0.748, 0.700, 0.676, 0.739, ]
label -1 : [0.867, 0.731, 0.655, 0.648, 0.766, ]
label -1 : [0.874, 0.761, 0.672, 0.641, 0.717, ]
label -1 : [0.833, 0.817, 0.648, 0.674, 0.684, ]
label -1 : [0.908, 0.825, 0.694, 0.703, 0.659, ]
label -1 : [0.950, 0.822, 0.667, 0.693, 0.710, ]
label -1 : [0.894, 0.812, 0.611, 0.679, 0.680, ]
label -1 : [0.854, 0.730, 0.641, 0.644, 0.737, ]
label -1 : [0.894, 0.801, 0.696, 0.696, 0.714, ]
label -1 : [0.806, 0.769, 0.694, 0.658, 0.679, ]
label -1 : [0.908, 0.816, 0.677, 0.696, 0.766, ]
label -1 : [0.700, 0.706, 0.656, 0.650, 0.686, ]
label -1 : [0.804, 0.709, 0.707, 0.770, 0.848, ]
label -1 : [0.749, 0.721, 0.664, 0.695, 0.740, ]
label  1 : [0.706, 0.948, 0.771, 0.721, 0.749, ]
label  1 : [0.684, 0.959, 0.769, 0.728, 0.740, ]
label  1 : [0.658, 0.955, 0.807, 0.763, 0.778, ]
label  1 : [0.641, 0.958, 0.837, 0.798, 0.790, ]
label  1 : [0.512, 0.966, 0.874, 0.829, 0.841, ]
label  1 : [0.696, 0.948, 0.776, 0.726, 0.749, ]
label  1 : [0.684, 0.950, 0.773, 0.722, 0.742, ]
label -1 : [0.502, 0.738, 0.779, 0.647, 0.712, ]
label  5 : [0.731, 0.688, 0.678, 0.792, 0.920, ]
label  0 : [0.931, 0.718, 0.646, 0.630, 0.701, ]
label  2 : [0.652, 0.753, 0.936, 0.755, 0.659, ]
label  5 : [0.738, 0.712, 0.653, 0.688, 0.913, ]
label  0 : [0.904, 0.825, 0.639, 0.635, 0.716, ]
label  2 : [0.653, 0.744, 0.935, 0.749, 0.660, ]
label  1 : [0.639, 0.950, 0.806, 0.758, 0.779, ]
label  2 : [0.640, 0.760, 0.930, 0.750, 0.674, ]
label  2 : [0.632, 0.763, 0.933, 0.751, 0.670, ]
label -1 : [0.639, 0.866, 0.668, 0.667, 0.713, ]
label -1 : [0.593, 0.783, 0.596, 0.620, 0.762, ]
label -1 : [0.524, 0.772, 0.683, 0.692, 0.714, ]
label  1 : [0.698, 0.948, 0.756, 0.710, 0.746, ]
label  1 : [0.765, 0.947, 0.749, 0.705, 0.716, ]
label  1 : [0.667, 0.950, 0.791, 0.732, 0.762, ]
label -1 : [0.826, 0.729, 0.671, 0.645, 0.703, ]
label -1 : [0.827, 0.738, 0.665, 0.650, 0.723, ]
label  2 : [0.628, 0.794, 0.939, 0.756, 0.670, ]
label  0 : [0.812, 0.814, 0.643, 0.620, 0.702, ]
label  1 : [0.556, 0.970, 0.867, 0.825, 0.738, ]
label  5 : [0.639, 0.775, 0.670, 0.784, 0.964, ]
label -1 : [0.473, 0.880, 0.593, 0.620, 0.661, ]
label  1 : [0.659, 0.959, 0.840, 0.745, 0.738, ]
label  1 : [0.770, 0.939, 0.776, 0.718, 0.738, ]
label  1 : [0.704, 0.951, 0.778, 0.726, 0.737, ]
label  1 : [0.713, 0.949, 0.784, 0.748, 0.733, ]
label  1 : [0.637, 0.951, 0.816, 0.756, 0.723, ]
label  1 : [0.701, 0.945, 0.831, 0.750, 0.752, ]
label  1 : [0.511, 0.955, 0.867, 0.829, 0.835, ]
label  1 : [0.708, 0.930, 0.788, 0.743, 0.734, ]
label  1 : [0.674, 0.951, 0.804, 0.724, 0.743, ]
label -1 : [0.909, 0.806, 0.682, 0.671, 0.690, ]
label  5 : [0.624, 0.674, 0.638, 0.662, 0.919, ]
label  1 : [0.636, 0.946, 0.806, 0.763, 0.733, ]
label  0 : [0.840, 0.820, 0.620, 0.627, 0.713, ]
label  5 : [0.710, 0.685, 0.683, 0.681, 0.912, ]
label  1 : [0.722, 0.942, 0.823, 0.780, 0.737, ]
label  2 : [0.633, 0.760, 0.939, 0.767, 0.675, ]
label  0 : [0.925, 0.727, 0.645, 0.649, 0.690, ]
label  5 : [0.780, 0.722, 0.641, 0.661, 0.931, ]
label  0 : [0.932, 0.750, 0.656, 0.633, 0.716, ]
label  2 : [0.653, 0.723, 0.938, 0.762, 0.641, ]
label  1 : [0.667, 0.917, 0.815, 0.761, 0.734, ]
label  2 : [0.693, 0.757, 0.922, 0.744, 0.687, ]
label  2 : [0.675, 0.738, 0.931, 0.753, 0.655, ]
label -1 : [0.581, 0.792, 0.594, 0.588, 0.699, ]
label -1 : [0.578, 0.823, 0.707, 0.668, 0.700, ]
label  1 : [0.704, 0.957, 0.713, 0.668, 0.743, ]
label  1 : [0.784, 0.927, 0.759, 0.718, 0.705, ]
label  1 : [0.728, 0.948, 0.756, 0.704, 0.751, ]
label -1 : [0.748, 0.825, 0.723, 0.730, 0.782, ]
label  0 : [0.887, 0.816, 0.638, 0.672, 0.681, ]
label  1 : [0.506, 0.960, 0.864, 0.842, 0.852, ]
label  5 : [0.704, 0.795, 0.679, 0.799, 0.926, ]
label  2 : [0.588, 0.763, 0.923, 0.752, 0.676, ]
label -1 : [0.507, 0.676, 0.625, 0.583, 0.567, ]
label  1 : [0.563, 0.922, 0.856, 0.820, 0.832, ]
label  1 : [0.727, 0.938, 0.820, 0.791, 0.779, ]
label  1 : [0.748, 0.937, 0.736, 0.714, 0.754, ]
label  1 : [0.751, 0.937, 0.763, 0.728, 0.775, ]
label  5 : [0.608, 0.700, 0.751, 0.711, 0.911, ]
label  1 : [0.729, 0.879, 0.789, 0.756, 0.684, ]
label  0 : [0.786, 0.820, 0.623, 0.615, 0.703, ]
label  5 : [0.677, 0.712, 0.638, 0.816, 0.885, ]
label  0 : [0.881, 0.804, 0.612, 0.624, 0.705, ]
label  2 : [0.615, 0.789, 0.944, 0.771, 0.665, ]
label  1 : [0.605, 0.951, 0.846, 0.807, 0.809, ]
label  5 : [0.706, 0.694, 0.668, 0.698, 0.912, ]
label  0 : [0.929, 0.736, 0.664, 0.646, 0.695, ]
label  2 : [0.625, 0.736, 0.918, 0.763, 0.659, ]
label  2 : [0.674, 0.736, 0.919, 0.777, 0.657, ]
label  2 : [0.672, 0.740, 0.930, 0.746, 0.667, ]
label  1 : [0.706, 0.941, 0.765, 0.729, 0.751, ]
label -1 : [0.566, 0.826, 0.604, 0.581, 0.680, ]
label -1 : [0.587, 0.828, 0.695, 0.674, 0.676, ]
label  1 : [0.758, 0.884, 0.736, 0.702, 0.679, ]
label  1 : [0.784, 0.925, 0.756, 0.736, 0.742, ]
label  1 : [0.750, 0.933, 0.714, 0.690, 0.746, ]
label  1 : [0.709, 0.925, 0.799, 0.764, 0.711, ]
label  1 : [0.567, 0.959, 0.852, 0.809, 0.740, ]
label  1 : [0.578, 0.938, 0.850, 0.814, 0.822, ]
label  1 : [0.767, 0.938, 0.717, 0.696, 0.772, ]
label  1 : [0.727, 0.935, 0.768, 0.739, 0.744, ]
label -1 : [0.851, 0.864, 0.698, 0.668, 0.646, ]
label -1 : [0.811, 0.783, 0.728, 0.724, 0.762, ]
label -1 : [0.960, 0.817, 0.667, 0.725, 0.724, ]
label -1 : [0.818, 0.827, 0.724, 0.692, 0.776, ]
label -1 : [0.796, 0.750, 0.694, 0.659, 0.664, ]
label -1 : [0.872, 0.776, 0.711, 0.697, 0.678, ]
label  1 : [0.755, 0.939, 0.801, 0.753, 0.738, ]
label  2 : [0.664, 0.759, 0.933, 0.752, 0.686, ]
label  2 : [0.645, 0.762, 0.933, 0.756, 0.683, ]
label  2 : [0.669, 0.760, 0.934, 0.752, 0.686, ]
label  2 : [0.640, 0.794, 0.932, 0.764, 0.669, ]
label  1 : [0.726, 0.946, 0.722, 0.670, 0.716, ]
label  2 : [0.654, 0.742, 0.931, 0.767, 0.680, ]
label  2 : [0.606, 0.841, 0.929, 0.750, 0.658, ]
label  2 : [0.587, 0.841, 0.922, 0.766, 0.691, ]
label  1 : [0.680, 0.941, 0.838, 0.783, 0.784, ]
label -1 : [0.557, 0.829, 0.675, 0.598, 0.679, ]
label -1 : [0.509, 0.771, 0.691, 0.645, 0.692, ]
label -1 : [0.518, 0.814, 0.723, 0.639, 0.694, ]
label  2 : [0.637, 0.800, 0.935, 0.765, 0.670, ]
label  1 : [0.736, 0.926, 0.743, 0.700, 0.743, ]
label  2 : [0.658, 0.780, 0.949, 0.760, 0.672, ]
label  1 : [0.738, 0.936, 0.825, 0.778, 0.768, ]
label  1 : [0.728, 0.938, 0.718, 0.671, 0.730, ]

Running SVM training

Now the SVM is trained.
If the results look good, the trained model is saved and used from here on.
Contour No. 30 is known to be a failed detection, so it is removed first.

svm_inputs = copy.deepcopy(similarities_all)
svm_labels = copy.deepcopy(labels_all)

# Remove inadequate contour data in img1
del svm_inputs[30]
del svm_labels[30]

train_data, train_labels = get_random_sample(svm_inputs, svm_labels, [-1,0,1,2,3,5], 20, seed=123)
svm = prepare_svm(train_data, train_labels)

Running SVM inference

Checking the performance of the trained SVM model.

result = svm.predict(np.array(svm_inputs, 'float32'))
print_stat(result, svm_labels)
label -1 : {-1: 73, 0: 13, 1:  0, 2:  0, 3:  0, 5:  3, }
label  0 : {-1:  5, 0: 22, 1:  0, 2:  0, 3:  0, 5:  0, }
label  1 : {-1:  0, 0:  0, 1: 78, 2:  0, 3:  0, 5:  0, }
label  2 : {-1:  0, 0:  0, 1:  0, 2: 39, 3:  0, 5:  0, }
label  3 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  2, 5:  0, }
label  5 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  0, 5: 29, }

"0"以外の数字の輪郭であれば全て正解していますが、非数字の輪郭で判定の失敗がいくつかあります。
判定に失敗した輪郭を確認します。

subimgs = copy.deepcopy(subimgs_all)
subctrs = copy.deepcopy(subctrs_all)

del subimgs[30]
del subctrs[30]

for i,(sims,lab,res,img,ctr) in enumerate(zip(svm_inputs, svm_labels, result[1], subimgs, subctrs)):
    if lab != res[0]:
        print('No.', i)
        print('{: }'.format(lab), ' -> ', '{: d}'.format(int(res[0])), ' [',end='')
        for s in sims: print('{:.3f}, '.format(s), end='');
        print(']')
        img = cv2.drawContours(img, [ctr], -1, (0,255,0), 1)
        plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)),plt.xticks([]),plt.yticks([])
        plt.show()
    No. 16
    -1  ->   5  [0.796, 0.672, 0.715, 0.783, 0.878, ]

f:id:nokixa:20220327233722p:plain

    No. 37
    -1  ->   0  [0.840, 0.879, 0.677, 0.662, 0.765, ]

f:id:nokixa:20220327233724p:plain

    No. 52
    -1  ->   5  [0.780, 0.648, 0.735, 0.787, 0.825, ]

f:id:nokixa:20220327233727p:plain

    No. 68
     0  ->  -1  [0.827, 0.785, 0.669, 0.627, 0.743, ]

f:id:nokixa:20220327233729p:plain

    No. 76
    -1  ->   0  [0.942, 0.719, 0.693, 0.695, 0.736, ]

f:id:nokixa:20220327233732p:plain

    No. 78
     0  ->  -1  [0.845, 0.789, 0.660, 0.674, 0.745, ]

f:id:nokixa:20220327233734p:plain

    No. 94
     0  ->  -1  [0.843, 0.753, 0.688, 0.635, 0.697, ]

f:id:nokixa:20220327233737p:plain

    No. 133
    -1  ->   0  [0.846, 0.798, 0.613, 0.641, 0.669, ]

f:id:nokixa:20220327233739p:plain

    No. 136
    -1  ->   0  [0.867, 0.731, 0.655, 0.648, 0.766, ]

f:id:nokixa:20220327233741p:plain

    No. 137
    -1  ->   0  [0.874, 0.761, 0.672, 0.641, 0.717, ]

f:id:nokixa:20220327233744p:plain

    No. 139
    -1  ->   0  [0.908, 0.825, 0.694, 0.703, 0.659, ]

f:id:nokixa:20220327233746p:plain

    No. 140
    -1  ->   0  [0.950, 0.822, 0.667, 0.693, 0.710, ]

f:id:nokixa:20220327233748p:plain

    No. 141
    -1  ->   0  [0.894, 0.812, 0.611, 0.679, 0.680, ]

f:id:nokixa:20220327233751p:plain

    No. 142
    -1  ->   0  [0.854, 0.730, 0.641, 0.644, 0.737, ]

f:id:nokixa:20220327233753p:plain

    No. 143
    -1  ->   0  [0.894, 0.801, 0.696, 0.696, 0.714, ]

f:id:nokixa:20220327233757p:plain

    No. 145
    -1  ->   0  [0.908, 0.816, 0.677, 0.696, 0.766, ]

f:id:nokixa:20220327233759p:plain

    No. 147
    -1  ->   5  [0.804, 0.709, 0.707, 0.770, 0.848, ]

f:id:nokixa:20220327233801p:plain

    No. 175
     0  ->  -1  [0.812, 0.814, 0.643, 0.620, 0.702, ]

f:id:nokixa:20220327233804p:plain

    No. 188
    -1  ->   0  [0.909, 0.806, 0.682, 0.671, 0.690, ]

f:id:nokixa:20220327233806p:plain

    No. 219
     0  ->  -1  [0.786, 0.820, 0.623, 0.615, 0.703, ]

f:id:nokixa:20220327233808p:plain

    No. 242
    -1  ->   0  [0.960, 0.817, 0.667, 0.725, 0.724, ]

f:id:nokixa:20220327233811p:plain

SVM retraining

Some of the misjudged contour data is added to the training data.
Three non-digit contours were judged as "5"; two of those are added.
Then the prediction is run once more.

train_data += [svm_inputs[16], svm_inputs[52]]
train_labels += [svm_labels[16], svm_labels[52]]
svm = prepare_svm(train_data, train_labels)

result = svm.predict(np.array(svm_inputs, 'float32'))
print_stat(result, svm_labels)
label -1 : {-1: 75, 0: 13, 1:  0, 2:  0, 3:  0, 5:  1, }
label  0 : {-1:  5, 0: 22, 1:  0, 2:  0, 3:  0, 5:  0, }
label  1 : {-1:  0, 0:  0, 1: 78, 2:  0, 3:  0, 5:  0, }
label  2 : {-1:  0, 0:  0, 1:  0, 2: 39, 3:  0, 5:  0, }
label  3 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  2, 5:  0, }
label  5 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  0, 5: 29, }

"5"の文字を全て正しく判別できるかと期待したが、そうはならず。
このまま進めてしまうこととします。

Exporting the data

The template data and the SVM model obtained so far get saved here, so the point-counting app can use them.

Saving the SVM model

The SVM has a save method, so that is used.
For the template data, looking up how to persist Python objects turns up two patterns: pickle and json. pickle apparently has the weakness that loading data from an unknown source can execute unintended code, but since I only want to load data I prepared myself, that shouldn't be a problem here.
Still, json seems more worth knowing for the future, so json it is.

Official Python pickle documentation:

https://docs.python.org/ja/3/library/pickle.html

Official Python json documentation:

https://docs.python.org/ja/3/library/json.html

References on using json in Python:

https://hibiki-press.tech/python/json/1633
https://note.nkmk.me/python-json-load-dump/

svm.save('harupan_data/harupan_svm.dat')

The generated file looks like this.
It's plain text, with all the SVM information, including the parameters I set.

%YAML:1.0
---
opencv_ml_svm:
   format: 3
   svmType: C_SVC
   kernel:
      type: LINEAR
   C: 100.
   term_criteria: { epsilon:1.1920928955078125e-07, iterations:1000 }
   var_count: 5
   class_count: 6
   class_labels: !!opencv-matrix
      rows: 6
      cols: 1
      dt: i
      data: [ -1, 0, 1, 2, 3, 5 ]
   sv_total: 15
   support_vectors:
      - [ -1.80685959e+01, -5.69025517e+00, 1.04596977e+01,
          7.75204945e+00, -2.17070246e+00 ]
      - [ 1.23291440e-01, -1.82865601e+01, -6.46431494e+00,
          4.22870070e-01, 6.76044846e+00 ]

...

      - [ -2.71152169e-01, -2.04700381e-01, 1.73537850e+00,
          3.85429978e+00, -5.29199266e+00 ]
   uncompressed_sv_total: 38
   uncompressed_support_vectors:
      - [ 8.49453330e-01, 7.48484850e-01, 6.99999988e-01, 6.75757587e-01,
          7.39393950e-01 ]
      - [ 8.43867898e-01, 7.64705896e-01, 6.90823317e-01, 7.08056450e-01,
          7.38181829e-01 ]
          
...
          
      - [ 7.79716969e-01, 6.48148119e-01, 7.35420227e-01, 7.86544859e-01,
          8.25454533e-01 ]
   decision_functions:
      -
         sv_count: 1
         rho: -9.4865848064106029e+00
         alpha: [ 1. ]
         index: [ 0 ]
      -
         sv_count: 1
         rho: -1.5486015742808821e+01
         alpha: [ 1. ]
         index: [ 1 ]
      -
         sv_count: 1
         rho: 5.0411393634237656e-01
         alpha: [ 1. ]
         index: [ 2 ]
         
...
         
      -
         sv_count: 1
         rho: 2.3617392745496142e+00
         alpha: [ 1. ]
         index: [ 13 ]
      -
         sv_count: 1
         rho: 1.9971712154702148e-01
         alpha: [ 1. ]
         index: [ 14 ]

Load the saved data back in and check it.
For loading the model in Python, see:

https://qiita.com/color_box/items/6f7d06fc3d65c6913ebf

svm_restored = cv2.ml.SVM_load('harupan_data/harupan_svm.dat')
result = svm_restored.predict(np.array(svm_inputs, 'float32'))
print_stat(result, svm_labels)
label -1 : {-1: 75, 0: 13, 1:  0, 2:  0, 3:  0, 5:  1, }
label  0 : {-1:  5, 0: 22, 1:  0, 2:  0, 3:  0, 5:  0, }
label  1 : {-1:  0, 0:  0, 1: 78, 2:  0, 3:  0, 5:  0, }
label  2 : {-1:  0, 0:  0, 1:  0, 2: 39, 3:  0, 5:  0, }
label  3 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  2, 5:  0, }
label  5 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  0, 5: 29, }

Same result as before, so no problem.

Saving the template data (json)

For saving the templates as json, ndarray can't be handled directly, so each array is converted to a list first and saved that way.
On load it is converted back to ndarray.

Also, dictionary data apparently plays well with the json format, so that is combined in too.

https://wtnvenga.hatenablog.com/entry/2018/05/27/113848
https://note.nkmk.me/python-numpy-list/
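
At its core this is just an ndarray <-> list round trip, e.g.:

import json
import numpy as np

ctr = np.array([[[18, 52]], [[16, 52]], [[15, 52]]])
s = json.dumps(ctr.tolist())            # ndarray -> nested lists -> JSON text
ctr_restored = np.array(json.loads(s))  # JSON text -> nested lists -> ndarray
assert (ctr == ctr_restored).all()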

For each template, the contour data and the list of selected point indices are saved.
In the setup stage these are fed into the classes defined above to create the instances.

import json

# ctr_list: List of contours for (0, 1, 2, 3, 5)
# pts_idx_list: List of selected point indices for (1, 2, 3, 5)
def save_templates(filename, ctr_list, pts_idx_list):
    with open(filename, mode='w') as f:
        save_data = []
        save_data += [{'num': 0, 'ctr': ctr_list[0].tolist(), 'pts': [0]}]
        for num, ctr, pts_idx in zip([1,2,3,5], ctr_list[1:5], pts_idx_list):
            save_data += [{'num': num, 'ctr': ctr.tolist(), 'pts': pts_idx}]
        json.dump(save_data, f, indent=2)
    return

def load_templates(filename):
    with open(filename, mode='r') as f:
        load_data = json.load(f)
        templates_rtn = []
        for d in load_data:
            templates_rtn += [template_dataset(np.array(d['ctr']), d['num'], d['pts'])]
    return templates_rtn
save_templates('harupan_data/templates2019.json', subctrs1, subctrs1_selected_pts)
save_templates('harupan_data/templates2020.json', subctrs3, subctrs3_selected_pts)
save_templates('harupan_data/templates2021.json', subctrs5, subctrs5_selected_pts)

The generated data has the following shape.

[
  {
    "num": 0,
    "ctr": [
      [
        [
          18,
          52
        ]
      ],
      [
        [
          16,
          52
        ]
      ],
      [
        [
          15,
          52
        ]
      ],

...

      [
        [
          19,
          52
        ]
      ],
      [
        [
          19,
          52
        ]
      ]
    ],
    "pts": [
      0
    ]
  },
  {
    "num": 1,
    "ctr": [
      [
        [
          0,
          2
        ]
      ],
      [
        [
          1,
          0
        ]
      ],
      [
        [
          1,
          0
        ]
      ],

...

      [
        [
          0,
          4
        ]
      ],
      [
        [
          0,
          3
        ]
      ]
    ],
    "pts": [
      0,
      5,
      10,
      15,
      20,
      25,
      30,
      35,
      40,
      45,
      50,
      55,
      60,
      65,
      70,
      75,
      80,
      85,
      90,
      95,
      100,
      105,
      110,
      115,
      120,
      125,
      130,
      135,
      140
    ]
  },
  {
    "num": 2,
    "ctr": [
      [
        [
          12,
          1
        ]
      ],
      [
        [
          13,
          0
        ]
      ],

...

Not exactly easy to read, but I don't need to look at it directly, so no problem.
Being text, the files end up needlessly large, though...

I also confirmed that everything restores properly.

templates1_restored = load_templates('harupan_data/templates2019.json')
templates3_restored = load_templates('harupan_data/templates2020.json')
templates5_restored = load_templates('harupan_data/templates2021.json')

def disp_template(template):
    img = cv2.cvtColor(template.solid.copy(), cv2.COLOR_GRAY2RGB)
    if template.num != 0:
        img = cv2.drawContours(img, [template.ctr], -1, (0,255,0), 1)
        for p in template.pts:
            img = cv2.drawMarker(img, p, (255,0,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.imshow(img), plt.xticks([]), plt.yticks([])
    plt.show()

print('Template 2019')
for t in templates1_restored:
    disp_template(t)

print('Template 2020')
for t in templates3_restored:
    disp_template(t)

print('Template 2021')
for t in templates5_restored:
    disp_template(t)
    Template 2019

f:id:nokixa:20220327233813p:plain

f:id:nokixa:20220327233815p:plain

f:id:nokixa:20220327233818p:plain

f:id:nokixa:20220327233820p:plain

f:id:nokixa:20220327233823p:plain

    Template 2020

f:id:nokixa:20220327233825p:plain

f:id:nokixa:20220327233827p:plain

f:id:nokixa:20220327233830p:plain

f:id:nokixa:20220327233832p:plain

f:id:nokixa:20220327233834p:plain

    Template 2021

f:id:nokixa:20220327233837p:plain

f:id:nokixa:20220327233839p:plain

f:id:nokixa:20220327233841p:plain

f:id:nokixa:20220327233843p:plain

f:id:nokixa:20220327233846p:plain

Looks fine.

Script

I'd like to save all of the processing so far as a Python script, so that loading it brings in every function that's needed.

I created the script below (harupan.py); a usage sketch follows after it.
Hopefully this makes the Jupyter notebooks tidier.


######################################################
# Importing libraries
######################################################
import cv2
import numpy as np
from matplotlib import pyplot as plt
import math
import copy
import random
import json

######################################################
# Detecting contours
######################################################
def detect_candidate_contours(image, res_th=800):
    h, w, chs = image.shape
    if h > res_th or w > res_th:
        k = float(res_th)/h if w > h else float(res_th)/w
    else:
        k = 1.0
    img = cv2.resize(image, None, fx=k, fy=k, interpolation=cv2.INTER_AREA)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Convert hue value (rotation, mask by saturation)
    hsv[:,:,0] = np.where(hsv[:,:,0] < 50, hsv[:,:,0]+180, hsv[:,:,0])
    hsv[:,:,0] = np.where(hsv[:,:,1] < 100, 0, hsv[:,:,0])
    # Thresholding with cv2.inRange()
    th_hue = cv2.inRange(hsv[:,:,0], 135, 190)
    # Retrieve all points on the contours (cv2.CHAIN_APPROX_NONE)
    contours, hierarchy = cv2.findContours(th_hue, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    indices0 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] == -1]
    indices1 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] in indices0]
    contours1 = [contours[i] for i in indices1]
    contours1_filtered = [ctr for ctr in contours1 if cv2.contourArea(ctr) > float(res_th)*float(res_th)/4000]
    return contours1_filtered, img


######################################################
# Auxiliary functions
######################################################
def create_contour_area_image(img, ctr):
    x,y,w,h = cv2.boundingRect(ctr)
    rtn_img = img[y:y+h,x:x+w,:].copy()
    rtn_ctr = ctr.copy()
    origin = np.array([x,y])
    for c in rtn_ctr:
        c[0,:] -= origin
    return rtn_img, rtn_ctr

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# img_shape: Optional, tuple of (image_height, image_width), if omitted, calculated from ctr
def create_solid_contour(ctr, img_shape=(int(0),int(0))):
    if img_shape == (int(0),int(0)):
        _,_,w,h = cv2.boundingRect(ctr)
    else:
        h,w = img_shape
    img = np.zeros((h,w), 'uint8')
    img = cv2.drawContours(img, [ctr], -1, 255, -1)
    return img

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# Note: ctr is rotated and shifted in place; pass a copy if the original is still needed
def create_upright_solid_contour(ctr):
    (cx,cy),(w,h),angle = cv2.minAreaRect(ctr)
    M = cv2.getRotationMatrix2D((cx,cy), angle, 1)
    for i in range(ctr.shape[0]):
        ctr[i,0,:] = ( M @ np.array([ctr[i,0,0], ctr[i,0,1], 1]) ).astype('int')
    rect = cv2.boundingRect(ctr)
    img = np.zeros((rect[3],rect[2]), 'uint8')
    ctr -= rect[0:2]
    M[:,2] -= rect[0:2]
    img = cv2.drawContours(img, [ctr], -1, 255,-1)
    return img, M, ctr


######################################################
# Dataset classes
######################################################
class contour_dataset:
    def __init__(self, ctr):
        self.ctr = ctr.copy()
        self.rrect = cv2.minAreaRect(ctr)
        self.box = cv2.boxPoints(self.rrect)
        self.solid = create_solid_contour(ctr)
        self.pts = np.array([p for p in ctr[:,0,:]])

class template_dataset:
    def __init__(self, ctr, num, selected_idx=[0]):
        self.ctr = ctr.copy()
        self.num = num
        self.rrect = cv2.minAreaRect(ctr)
        self.box = cv2.boxPoints(self.rrect)
        if num == 0:
            self.solid,_,_ = create_upright_solid_contour(ctr)
        else:
            self.solid = create_solid_contour(ctr)
        self.pts = np.array([ctr[idx,0,:] for idx in selected_idx])


######################################################
# ICP
######################################################
# pts: list of 2D points, or ndarray of shape (n,2)
# query: 2D point to find nearest neighbor
def find_nearest_neighbor(pts, query):
    min_distance_sq = float('inf')
    min_idx = 0
    for i, p in enumerate(pts):
        d = np.dot(query - p, query - p)
        if d < min_distance_sq:
            min_distance_sq = d
            min_idx = i
    return min_idx, np.sqrt(min_distance_sq)

# src, dst: ndarray, shape is (n,2) (n: number of points)
def estimate_affine_2d(src, dst):
    n = min(src.shape[0], dst.shape[0])
    x = dst[0:n].flatten()
    A = np.zeros((2*n,6))
    for i in range(n):
        A[i*2,0] = src[i,0]
        A[i*2,1] = src[i,1]
        A[i*2,2] = 1
        A[i*2+1,3] = src[i,0]
        A[i*2+1,4] = src[i,1]
        A[i*2+1,5] = 1
    M = np.linalg.inv(A.T @ A) @ A.T @ x
    return M.reshape([2,3])

# Find optimum affine matrix using ICP algorithm
# src_pts: ndarray, shape is (n_s,2) (n_s: number of points)
# dst_pts: ndarray, shape is (n_d,2) (n_d: number of points, n_d should be larger or equal to n_s)
# initial_matrix: ndarray, shape is (2,3)
def icp(src_pts, dst_pts, max_iter=20, initial_matrix=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])):
    default_affine_matrix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    if dst_pts.shape[0] < src_pts.shape[0]:
        print("icp: Insufficient destination points")
        return default_affine_matrix, False
    if initial_matrix.shape != (2,3):
        print("icp: Illegal shape of initial_matrix")
        return default_affine_matrix, False
    M = initial_matrix
    # Store indices of the nearest neighbor point of dst_pts to the converted point of src_pts
    nn_idx = []
    for i in range(max_iter):
        nn_idx_tmp = []
        dst_pts_list = [p for p in dst_pts]
        idx_list = list(range(0,dst_pts.shape[0]))
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts_list, p2)
            nn_idx_tmp += [idx_list[idx]]
            del dst_pts_list[idx]
            del idx_list[idx]
        if nn_idx != [] and nn_idx == nn_idx_tmp:
            break
        dst_pts2 = np.zeros_like(src_pts)
        for j,idx in enumerate(nn_idx_tmp):
            dst_pts2[j,:] = dst_pts[idx,:]
        M = estimate_affine_2d(src_pts, dst_pts2)
        nn_idx = nn_idx_tmp
        if i == max_iter -1:
            return M, False
    return M, True


######################################################
# Calculating similarity and determining the number
######################################################
def binary_image_similarity(img1, img2):
    if img1.shape != img2.shape:
        print('binary_image_similarity: Different image size')
        return 0.0
    xor_img = cv2.bitwise_xor(img1, img2)
    return 1.0 - float(np.count_nonzero(xor_img)) / (img1.shape[0]*img1.shape[1])

# src, dst: contour_dataset or template_dataset (holding member variables box, solid)
def get_transform_by_rotated_rectangle(src, dst):
    # Rotated patterns are created when starting index is slided
    dst_box2 = np.vstack([dst.box, dst.box])
    max_similarity = 0.0
    max_converted_img = np.zeros(dst.solid.shape, 'uint8')
    M_rtn = np.float32([[1,0,0],[0,1,0]])  # fallback so M_rtn is always defined
    for i in range(4):
        M = cv2.getAffineTransform(src.box[0:3], dst_box2[i:i+3])
        converted_img = cv2.warpAffine(src.solid, M, dsize=(dst.solid.shape[1], dst.solid.shape[0]), flags=cv2.INTER_NEAREST)
        similarity = binary_image_similarity(converted_img, dst.solid)
        if similarity > max_similarity:
            M_rtn = M
            max_similarity = similarity
            max_converted_img = converted_img
    return M_rtn, max_similarity, max_converted_img

def get_similarity_with_template(target_data, template_data, sim_th_high=0.95, sim_th_low=0.7):
    _,(w1,h1), _ = target_data.rrect
    _,(w2,h2), _ = template_data.rrect
    r = w1/h1 if w1 < h1 else h1/w1
    r = r * h2/w2 if w2 < h2 else r * w2/h2
    M, sim_init, _ = get_transform_by_rotated_rectangle(template_data, target_data)
    if sim_init > sim_th_high or sim_init < sim_th_low or r > 1.4 or r < 0.7:
        dsize = (template_data.solid.shape[1], template_data.solid.shape[0])
        flags = cv2.INTER_NEAREST|cv2.WARP_INVERSE_MAP
        converted_img = cv2.warpAffine(target_data.solid, M, dsize=dsize, flags=flags)
        return sim_init, converted_img
    M, _ = icp(template_data.pts, target_data.pts, initial_matrix=M)
    Minv = cv2.invertAffineTransform(M)
    converted_ctr = np.zeros_like(target_data.ctr)
    for i in range(target_data.ctr.shape[0]):
        converted_ctr[i,0,:] = (Minv[:,0:2] @ target_data.ctr[i,0,:]) + Minv[:,2]
    converted_img = create_solid_contour(converted_ctr, img_shape=template_data.solid.shape)
    val = binary_image_similarity(converted_img, template_data.solid)
    return val, converted_img

def get_similarity_with_template_zero(target_data, template_data):
    dsize = (template_data.solid.shape[1], template_data.solid.shape[0])
    converted_img = cv2.resize(target_data.solid, dsize=dsize, interpolation=cv2.INTER_NEAREST)
    val = binary_image_similarity(converted_img, template_data.solid)
    return val, converted_img

def get_similarities(target, templates):
    similarities = []
    converted_imgs = []
    for tmpl in templates:
        if tmpl.num == 0:
            sim,converted_img = get_similarity_with_template_zero(target, tmpl)
        else:
            sim,converted_img = get_similarity_with_template(target, tmpl)
        similarities += [sim]
        converted_imgs += [converted_img]
    return similarities, converted_imgs

# target: Single contour to compare
# templates: List of template_dataset (for numbers 0, 1, 2, 3, 5)
# svm: Trained SVM
# return: determined number (0,1,2,3,5), -1 if none corresponds
def determine_number(target, templates, svm):
    similarities,_ = get_similarities(target, templates)
    _, result = svm.predict(np.array([similarities], 'float32'))
    return int(result[0,0])


######################################################
# Loading template data and SVM model
######################################################
def load_svm(filename):
    return cv2.ml.SVM_load(filename)

def load_templates(filename):
    with open(filename, mode='r') as f:
        load_data = json.load(f)
        templates_rtn = []
        for d in load_data:
            templates_rtn += [template_dataset(np.array(d['ctr']), d['num'], d['pts'])]
    return templates_rtn
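
With that, a notebook should only need something like the following (hypothetical usage, assuming harupan.py is on the import path and the data files saved above exist):

from harupan import *

svm = load_svm('harupan_data/harupan_svm.dat')
templates = load_templates('harupan_data/templates2019.json')
ctrs, img = detect_candidate_contours(cv2.imread('harupan_190428_1.jpg'))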

That's all for now

There are still things left to do for the Haru no Pan Matsuri sticker point tally:

  • the point-total calculation
  • the end-to-end flow from a single input image to the computed score

but those will be for the next post.

Trying out OpenCV - 37. Tuning the processing

The Haru no Pan Matsuri point counting mostly works with the processing from the previous posts, but a couple of things get tuned here:

  • the ICP convergence condition
  • the judgment made with the initial transform matrix

I don't have much energy to polish the writing this time...
Please forgive the rough text below.

Preparation

Since this is a freshly created Jupyter notebook, the necessary steps are done again first:

  • library imports
  • loading the image data
  • defining the required functions (this time, detection of candidate point-digit contours)
  • preparing the point-digit template data

Debug output that no longer seems necessary is removed.

Maybe I should bundle all of this into a script...

Library imports and image loading

import cv2
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import math

img1 = cv2.imread('harupan_190428_1.jpg')
img2 = cv2.imread('harupan_190428_2.jpg')
img3 = cv2.imread('harupan_200317_1.jpg')
img4 = cv2.imread('harupan_210227_2.jpg')
img5 = cv2.imread('harupan_210402_1.jpg')
img6 = cv2.imread('harupan_210402_2.jpg')
img7 = cv2.imread('harupan_210414_1.jpg')

Detecting candidate point-digit contours

def detect_candidate_contours(image, res_th=800):
    h, w, chs = image.shape
    if h > res_th or w > res_th:
        k = float(res_th)/h if w > h else float(res_th)/w
    else:
        k = 1.0
    img = cv2.resize(image, None, fx=k, fy=k, interpolation=cv2.INTER_AREA)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Convert hue value (rotation, mask by saturation)
    hsv[:,:,0] = np.where(hsv[:,:,0] < 50, hsv[:,:,0]+180, hsv[:,:,0])
    hsv[:,:,0] = np.where(hsv[:,:,1] < 100, 0, hsv[:,:,0])
    # Thresholding with cv2.inRange()
    th_hue = cv2.inRange(hsv[:,:,0], 135, 190)
    # Retrieve all points on the contours (cv2.CHAIN_APPROX_NONE)
    contours, hierarchy = cv2.findContours(th_hue, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    indices0 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] == -1]
    indices1 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] in indices0]
    contours1 = [contours[i] for i in indices1]
    contours1_filtered = [ctr for ctr in contours1 if cv2.contourArea(ctr) > float(res_th)*float(res_th)/4000]
    return contours1_filtered, img

Helper functions

  • Create a small image around a contour
    This used to take a list of contours plus an index, but the index was only used to pick one element from the list, so it now takes the target contour itself.
  • Create a filled (solid) contour image
def create_contour_area_image(img, ctr):
    x,y,w,h = cv2.boundingRect(ctr)
    rtn_img = img[y:y+h,x:x+w,:].copy()
    rtn_ctr = ctr.copy()
    origin = np.array([x,y])
    for c in rtn_ctr:
        c[0,:] -= origin
    return rtn_img, rtn_ctr

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# img_shape: Optional, tuple of (image_height, image_width), if omitted, calculated from ctr
def create_solid_contour(ctr, img_shape=(int(0),int(0))):
    if img_shape == (int(0),int(0)):
        _,_,w,h = cv2.boundingRect(ctr)
    else:
        h,w = img_shape
    img = np.zeros((h,w), 'uint8')
    img = cv2.drawContours(img, [ctr], -1, 255, -1)
    return img

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# img_shape: Optional, tuple of (image_height, image_width), determined from fitted ellipse if omitted
def create_upright_solid_contour(ctr,img_shape=(int(0),int(0))):
    (cx,cy),(w,h),angle = cv2.fitEllipse(ctr)
    if img_shape == (int(0),int(0)):
        # Default: same as fitted ellipse
        img_shape = (math.ceil(w), math.ceil(h))
    ctr_img = create_solid_contour(ctr)
    Mrot = cv2.getRotationMatrix2D((cx,cy), angle, 1)
    Mrot[0,2] -= cx - w/2
    Mrot[1,2] -= cy - h/2
    rotated_ctr_img = cv2.warpAffine(ctr_img, Mrot, dsize=img_shape, flags=cv2.INTER_NEAREST)
    return rotated_ctr_img

Collecting contour data

For each contour, also prepare the cropped image around it and the contour data with its origin shifted to match.

imgs = [img1, img2, img3, img4, img5, img6, img7]
resized_imgs = []
ctrs_all = []
subctrs_all = []
subimgs_all = []
for img in imgs:
    ctrs, im = detect_candidate_contours(img)
    resized_imgs += [im]
    ctrs_all += [ctrs]
    
    subctrs = []
    subimgs = []
    for ctr in ctrs:
        subimg,subctr = create_contour_area_image(im, ctr)
        subctrs += [subctr]
        subimgs += [subimg]
    subctrs_all += [subctrs]
    subimgs_all += [subimgs]

Template data

ctrs1_idx_zero = 26
ctrs1_idx_one = 27
ctrs1_idx_two = 24
ctrs1_idx_three = 33
ctrs1_idx_five = 8
ctrs1_idx_numbers = [ctrs1_idx_zero, ctrs1_idx_one, ctrs1_idx_two, ctrs1_idx_three, ctrs1_idx_five]

subimgs1 = []
subctrs1 = []
binimgs1 = []
for i,idx in enumerate(ctrs1_idx_numbers):
    img, ctrs = create_contour_area_image(resized_imgs[0], ctrs_all[0][idx])
    if i == 0:
        binimg = create_upright_solid_contour(ctrs)
    else:
        binimg = create_solid_contour(ctrs)
    subimgs1 += [img.copy()]
    subctrs1 += [ctrs.copy()]
    binimgs1 += [binimg.copy()]
    ctr_img = cv2.drawContours(img, [ctrs], -1, (0,255,0), 2)
    plt.subplot(2,5,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(2,5,6+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220319233844p:plain

ctrs3_idx_zero = 7
ctrs3_idx_one = 4
ctrs3_idx_two = 17
ctrs3_idx_five = 6
ctrs3_idx_numbers = [ctrs3_idx_zero, ctrs3_idx_one, ctrs3_idx_two, ctrs3_idx_five]

subimgs3 = []
subctrs3 = []
binimgs3 = []
for i,idx in enumerate(ctrs3_idx_numbers):
    img, ctrs = create_contour_area_image(resized_imgs[2], ctrs_all[2][idx])
    if i == 0:
        binimg = create_upright_solid_contour(ctrs)
    else:
        binimg = create_solid_contour(ctrs)
    subimgs3 += [img.copy()]
    subctrs3 += [ctrs.copy()]
    binimgs3 += [binimg.copy()]
    ctr_img = cv2.drawContours(img, [ctrs], -1, (0,255,0), 2)
    plt.subplot(2,4,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(2,4,5+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

subimgs3.insert(3, subimgs1[3])
subctrs3.insert(3, subctrs1[3])
binimgs3.insert(3, binimgs1[3])

f:id:nokixa:20220319233847p:plain

ctrs5_idx_zero = 3
ctrs5_idx_one = 4
ctrs5_idx_two = 2
ctrs5_idx_five = 5
ctrs5_idx_numbers = [ctrs5_idx_zero, ctrs5_idx_one, ctrs5_idx_two, ctrs5_idx_five]

subimgs5 = []
subctrs5 = []
binimgs5 = []
for i,idx in enumerate(ctrs5_idx_numbers):
    img, ctrs = create_contour_area_image(resized_imgs[4], ctrs_all[4][idx])
    if i == 0:
        binimg = create_upright_solid_contour(ctrs)
    else:
        binimg = create_solid_contour(ctrs)
    subimgs5 += [img.copy()]
    subctrs5 += [ctrs.copy()]
    binimgs5 += [binimg.copy()]
    ctr_img = cv2.drawContours(img, [ctrs], -1, (0,255,0), 2)
    plt.subplot(2,4,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(2,4,5+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

subimgs5.insert(3, subimgs1[3])
subctrs5.insert(3, subctrs1[3])
binimgs5.insert(3, binimgs1[3])

f:id:nokixa:20220319233850p:plain

Selecting template contour points

subctrs1_selected_pts_one = [i for i in range(subctrs1[1].shape[0]) if i % 5 == 0]
subctrs1_selected_pts_two = [i for i in range(subctrs1[2].shape[0]) if i % 5 == 0]
subctrs1_selected_pts_three = [i for i in range(subctrs1[3].shape[0]) if i % 5 == 0]
subctrs1_selected_pts_five = [i for i in range(subctrs1[4].shape[0]) if i % 5 == 0]

subctrs1_selected_pts = [subctrs1_selected_pts_one, subctrs1_selected_pts_two, subctrs1_selected_pts_three, subctrs1_selected_pts_five]
for i in range(4):
    img = subimgs1[i+1].copy()
    for p in subctrs1_selected_pts[i]:
        img = cv2.drawMarker(img, subctrs1[i+1][p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(1,4,1+i), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220319233853p:plain

subctrs3_selected_pts_one = [i for i in range(subctrs3[1].shape[0]) if i % 5 == 0]
subctrs3_selected_pts_two = [i for i in range(subctrs3[2].shape[0]) if i % 5 == 0]
subctrs3_selected_pts_three = [i for i in range(subctrs3[3].shape[0]) if i % 5 == 0]
subctrs3_selected_pts_five = [i for i in range(subctrs3[4].shape[0]) if i % 5 == 0]

subctrs3_selected_pts = [subctrs3_selected_pts_one, subctrs3_selected_pts_two, subctrs3_selected_pts_three, subctrs3_selected_pts_five]
for i in range(4):
    if subimgs3[i+1].shape == (1,):
        continue
    img = subimgs3[i+1].copy()
    for p in subctrs3_selected_pts[i]:
        img = cv2.drawMarker(img, subctrs3[i+1][p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(1,4,1+i), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220319233856p:plain

subctrs5_selected_pts_one = [i for i in range(subctrs5[1].shape[0]) if i % 5 == 0]
subctrs5_selected_pts_two = [i for i in range(subctrs5[2].shape[0]) if i % 5 == 0]
subctrs5_selected_pts_three = [i for i in range(subctrs5[3].shape[0]) if i % 5 == 0]
subctrs5_selected_pts_five = [i for i in range(subctrs5[4].shape[0]) if i % 5 == 0]

subctrs5_selected_pts = [subctrs5_selected_pts_one, subctrs5_selected_pts_two, subctrs5_selected_pts_three, subctrs5_selected_pts_five]
for i in range(4):
    if subimgs5[i+1].shape == (1,):
        continue
    img = subimgs5[i+1].copy()
    for p in subctrs5_selected_pts[i]:
        img = cv2.drawMarker(img, subctrs5[i+1][p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(1,4,1+i), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220319233858p:plain

Ground-truth labels for each image

labels1 = [-1,-1,-1,-1,-1
           ,-1,5,0,5,1
           ,5,0,2,1,2
           ,-1,-1,1,1,5
           ,0,2,5,0,2
           ,5,0,1,2,-1
           ,5,1,2,3,1
           ,5,0,-1]

labels2 = [-1,-1,-1,-1,-1
           ,-1,5,0,5,1
           ,5,0,2,1,2
           ,-1,-1,1,1,5
           ,0,-1,2,5,0
           ,2,5,0,1,2
           ,5,0,1,-1,2
           ,-1,-1,-1,3,-1
           ,5,0,-1,1,-1
           ,-1]

labels3 = [-1,-1,-1,-1,1
           ,1,5,0,1,1
           ,5,0,5,0,-1
           ,-1,-1,2,-1,-1
           ,-1,1,1,1,-1
           ,1,-1,-1,1,1
           ,-1,2,-1,1,-1
           ,1,2,-1,1,-1
           ,-1,2,5,-1,0
           ,-1,1,1]

labels4 = [-1,-1,-1,-1,-1
           ,-1,-1,-1,-1,-1
           ,-1,-1,-1,-1,-1
           ,-1,-1,-1,1,1
           ,1,1,1,1,1
           ,-1,5,0,2,5
           ,0,2,1,2,2
           ,-1,-1,-1,1,1
           ,1]

labels5 = [-1,-1,2,0,1
           ,5,-1,1,1,1
           ,1,1,1,1,1
           ,1,-1,5,1,0
           ,5,1,2,0,5
           ,0,2,1,2,2
           ,-1,-1,1,1,1
           ]

labels6 = [-1,0,1,5,2
           ,-1,1,1,1,1
           ,5,1,0,5,0
           ,2,1,5,0,2
           ,2,2,1,-1,-1
           ,1,1,1,1,1
           ,1,1,1]

labels7 = [-1,-1,-1,-1,-1
           ,-1,1,2,2,2
           ,2,1,2,2,2
           ,1,-1,-1,-1,2
           ,1,2,1,1]

Reorganizing the digit recognition processing

The functions weren't split up very well, so I'm re-partitioning them.

Processing that is already well separated

The following are sub-routines used by the ICP algorithm; these I consider to be partitioned appropriately already.

# pts: list of 2D points, or ndarray of shape (n,2)
# query: 2D point to find nearest neighbor
def find_nearest_neighbor(pts, query):
    min_distance = float('inf')
    min_idx = 0
    for i, p in enumerate(pts):
        d = np.linalg.norm(query - p)
        if(d < min_distance):
            min_distance = d
            min_idx = i
    return min_idx, min_distance
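
Incidentally, find_nearest_neighbor() is a plain linear scan and gets called once per source point per ICP iteration, so it dominates the runtime. A vectorized NumPy variant along these lines (my own sketch, not the version used in this post) computes all the distances at once; note that icp() would also need its per-point list deletion reworked to benefit fully:

# Vectorized nearest-neighbor lookup (a sketch using NumPy broadcasting).
def find_nearest_neighbor_fast(pts, query):
    pts = np.asarray(pts, dtype=float)       # (n,2)
    d = np.linalg.norm(pts - query, axis=1)  # distances to every point at once
    i = int(np.argmin(d))
    return i, d[i]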

# src, dst: ndarray, shape is (n,2) (n: number of points)
def estimate_affine_2d(src, dst):
    n = min(src.shape[0], dst.shape[0])
    x = dst[0:n].flatten()
    A = np.zeros((2*n,6))
    for i in range(n):
        A[i*2,0] = src[i,0]
        A[i*2,1] = src[i,1]
        A[i*2,2] = 1
        A[i*2+1,3] = src[i,0]
        A[i*2+1,4] = src[i,1]
        A[i*2+1,5] = 1
    M = np.linalg.inv(A.T @ A) @ A.T @ x
    return M.reshape([2,3])
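
estimate_affine_2d() solves the least-squares problem through the normal equations, M = (AᵀA)⁻¹Aᵀx. A quick self-check with a known transform (my own sketch, not from the original notebook):

# Sanity check: estimate_affine_2d() should recover a known affine matrix
# exactly when the correspondences are exact.
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]], dtype=float)
M_true = np.array([[1.2, -0.3, 4.0],
                   [0.5,  0.9, -2.0]])
dst = src @ M_true[:, 0:2].T + M_true[:, 2]
assert np.allclose(estimate_affine_2d(src, dst), M_true)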

ICP with debug support

I want to be able to watch ICP's intermediate state: transform the source points with the matrix currently being optimized and overlay them on the destination image. For that, the list of transformed points is kept for every iteration.

# Find optimum affine matrix using ICP algorithm
# src_pts: ndarray, shape is (n_s,2) (n_s: number of points)
# dst_pts: ndarray, shape is (n_d,2) (n_d: number of points, n_d should be larger or equal to n_s)
# initial_matrix: ndarray, shape is (2,3)
def icp(src_pts, dst_pts, max_iter=100, initial_matrix=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]), debug=False):
    default_affine_matrix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    if dst_pts.shape[0] < src_pts.shape[0]:
        print("icp: Insufficient destination points")
        return default_affine_matrix, False, []
    if initial_matrix.shape != (2,3):
        print("icp: Illegal shape of initial_matrix")
        return default_affine_matrix, False, []
    M = initial_matrix
    # Store indices of the nearest neighbor point of dst_pts to the converted point of src_pts
    nn_idx = []
    converted_pts_list = []
    for i in range(max_iter):
        nn_idx_tmp = []
        dst_pts_list = [p for p in dst_pts]
        idx_list = list(range(0,dst_pts.shape[0]))
        if debug: converted_pts = [];
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts_list, p2)
            nn_idx_tmp += [idx_list[idx]]
            del dst_pts_list[idx]
            del idx_list[idx]
            if debug: converted_pts += [p2];
        if debug: converted_pts_list += [converted_pts];
        if nn_idx != [] and nn_idx == nn_idx_tmp:
            break
        dst_pts2 = np.zeros_like(src_pts)
        for j,idx in enumerate(nn_idx_tmp):
            dst_pts2[j,:] = dst_pts[idx,:]
        M = estimate_affine_2d(src_pts, dst_pts2)
        nn_idx = nn_idx_tmp
        if i == max_iter -1:
            return M, False, converted_pts_list
    return M, True, converted_pts_list

The following parts get reworked:

  • For the template and target contours, some of the same data is generated more than once:
    • the rotated bounding rectangle and the solid (filled) image, in the initial-transform estimation
    • the contour point array, in ICP
    • the template's solid image, in the similarity computation
    • for the "0" template, only the upright-rotated solid image is needed
    • So generate these once as a dataset and pass that to each function.
  • Initial transform estimation (the get_initial_trainsform() function)
    Takes the dataset above as its argument. Also renamed to reflect that it computes a transform from the rotated bounding rectangles.
  • The get_optimum_transform() function
    It calls the initial-transform estimation and ICP, but it is only called from one place, so inline it into its caller.
    Also, provisionally add a lower threshold on the initial-transform similarity (below it, give up without running ICP).
  • The contour similarity computation (the get_contours_similarity() function)
    Likewise takes the dataset as its argument, with the former get_optimum_transform() logic inlined directly.
  • The contour similarity for "0" (the get_contours_similarity_zero() function)
    No change in content, but unify it to take the same dataset argument.

The dataset also carries the rotated bounding rectangle, since its aspect ratio will be needed later.

First, prepare the datasets, implemented as classes.

For the templates, "0" needs different handling from the other digits, and for "0" the contour point data is unnecessary.

class contour_dataset:
    def __init__(self, ctr):
        self.ctr = ctr.copy()
        self.rrect = cv2.minAreaRect(ctr)
        self.box = cv2.boxPoints(self.rrect)
        self.solid = create_solid_contour(ctr)
        self.pts = np.array([p for p in ctr[:,0,:]])

class template_dataset:
    def __init__(self, ctr, num, selected_idx=[0]):
        self.ctr = ctr.copy()
        self.num = num
        self.rrect = cv2.minAreaRect(ctr)
        self.box = cv2.boxPoints(self.rrect)
        if num == 0:
            self.solid = create_upright_solid_contour(ctr)
        else:
            self.solid = create_solid_contour(ctr)
        self.pts = np.array([ctr[idx,0,:] for idx in selected_idx])

# Prepare template data for "0"
templates1 = [template_dataset(subctrs1[0], 0)]
templates3 = [template_dataset(subctrs1[0], 0)]
templates5 = [template_dataset(subctrs1[0], 0)]
# Prepare template data for other numbers
numbers = [1, 2, 3, 5]
for i,num in enumerate(numbers):
    templates1 += [template_dataset(subctrs1[i+1], num, subctrs1_selected_pts[i])]
    templates3 += [template_dataset(subctrs3[i+1], num, subctrs3_selected_pts[i])]
    templates5 += [template_dataset(subctrs5[i+1], num, subctrs5_selected_pts[i])]
ctrs_all_datasets = [[contour_dataset(ctr) for ctr in ctrs] for ctrs in subctrs_all]

Below are the revised initial-transform estimation and contour similarity computations.

# src, dst: contour_dataset or template_dataset (holding member variables box, solid)
def get_transform_by_rotated_rectangle(src, dst):
    # Rotated match patterns are generated by sliding the starting index of the box points
    dst_box2 = np.vstack([dst.box, dst.box])
    M_rtn = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    max_similarity = 0.0
    max_converted_img = np.zeros(dst.solid.shape, 'uint8')
    for i in range(4):
        M = cv2.getAffineTransform(src.box[0:3], dst_box2[i:i+3])
        converted_img = cv2.warpAffine(src.solid, M, dsize=(dst.solid.shape[1], dst.solid.shape[0]), flags=cv2.INTER_NEAREST)
        similarity = cv2.matchTemplate(converted_img, dst.solid, cv2.TM_CCORR_NORMED)
        if similarity[0,0] > max_similarity:
            M_rtn = M
            max_similarity = similarity[0,0]
            max_converted_img = converted_img
    return M_rtn, max_similarity, max_converted_img

def get_similarity_with_template(target_data, template_data, sim_th_high=0.92, sim_th_low=0.7):
    M, sim_init, _ = get_transform_by_rotated_rectangle(template_data, target_data)
    if sim_init < sim_th_high and sim_init > sim_th_low:
        print('get_similarity_with_template: Execute ICP')
        M, _, _ = icp(template_data.pts, target_data.pts)
    Minv = cv2.invertAffineTransform(M)
    converted_ctr = np.zeros_like(target_data.ctr)
    for i in range(target_data.ctr.shape[0]):
        converted_ctr[i,0,:] = (Minv[:,0:2] @ target_data.ctr[i,0,:]) + Minv[:,2]
    converted_img = create_solid_contour(converted_ctr, img_shape=template_data.solid.shape)
    val = cv2.matchTemplate(converted_img, template_data.solid, cv2.TM_CCORR_NORMED)
    return val[0,0], converted_img

def get_similarity_with_template_zero(target_data, template_data):
    dsize = (template_data.solid.shape[1], template_data.solid.shape[0])
    converted_img = cv2.resize(target_data.solid, dsize=dsize, interpolation=cv2.INTER_NEAREST)
    val = cv2.matchTemplate(converted_img, template_data.solid, cv2.TM_CCORR_NORMED)
    return val[0,0], converted_img

What remains is the digit classification, implemented with an SVM (training the SVM is handled separately).
Using the datasets above tidies up the code a little.

def get_similarities(target, templates, debug_number=-1):
    similarities = []
    dbg_img = None
    for tmpl in templates:
        if tmpl.num == 0:
            sim, img = get_similarity_with_template_zero(target, tmpl)
        else:
            sim, img = get_similarity_with_template(target, tmpl)
        similarities += [sim]
        if debug_number == tmpl.num:
            dbg_img = img.copy()

    # dbg_img stays None when no template matched debug_number
    if dbg_img is None:
        dbg_img = np.zeros((1,1), 'uint8')
    if debug_number != -1:
        return similarities, dbg_img
    else:
        return similarities

# target: Single contour to compare
# templates: List of template_dataset (for numbers 0, 1, 2, 3, 5)
# svm: Trained SVM
# debug_number: Optional, if specified, comparing image for the number is returned
# return: determined number (0,1,2,3,5), -1 if none corresponds
def determine_number(target, templates, svm, debug_number=-1):
    if debug_number != -1:
        similarities, dbg_img = get_similarities(target, templates, debug_number)
    else:
        similarities = get_similarities(target, templates)
    # cv2.ml expects a 2D float32 sample array
    _, result = svm.predict(np.array([similarities], 'float32'))
    if debug_number != -1:
        return int(result[0]), similarities, dbg_img
    else:
        return int(result[0])
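
A sketch of how the debug path might be used ('target' stands for any contour_dataset and 'svm' for an already-trained model; both are placeholders here):

# Hypothetical debug usage: also get back the similarity vector and the
# converted image compared against the "5" template.
num, sims, dbg_img = determine_number(target, templates1, svm, debug_number=5)
print(num, sims)
plt.imshow(dbg_img, cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()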

Investigating the ICP convergence condition

Let's look at how the ICP processing progresses for some of the image/contour/template combinations.
I want to see a transform from a matching template, one from a non-matching template, and one that fails to converge.

The second image had a pattern that does not converge, so I'll look at that one.

labels = [-1, 0, 1, 2, 3, 5]
labels_checked = {lab: False for lab in labels}

for i, target in enumerate(ctrs_all_datasets[1]):
    ctr_img = cv2.drawContours(subimgs_all[1][i].copy(), [subctrs_all[1][i]], -1, (0,255,0), 2)
    for tmpl in templates1[1:5]:
        M, sim_init, _ = get_transform_by_rotated_rectangle(tmpl, target)
        M, result, converted_pts  = icp(tmpl.pts, target.pts, initial_matrix=M, debug=True)
        if labels_checked[labels2[i]] and result:
            continue
        print('ICP iterations: ', len(converted_pts))
        subx = min(10, len(converted_pts)) 
        suby = int(len(converted_pts) / 10) + (1 if len(converted_pts) % 10 else 0)
        plt.figure(figsize=(12.8,1.2*suby),dpi=100)
        plt.suptitle('Contour No. %d' %(i) + ', Label %d' %(labels2[i]) + ', Template %d' %(tmpl.num))
        for j,pts in enumerate(converted_pts):
            img = ctr_img.copy()
            for p in pts:
                img = cv2.drawMarker(img, (int(p[0]),int(p[1])), (0,0,255), markerType=cv2.MARKER_CROSS, markerSize=3)
            plt.subplot(suby,subx,int(j/subx)*subx+j%subx+1)
            plt.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB)),plt.xticks([]),plt.yticks([])
        plt.show()
    labels_checked[labels2[i]] = True
    ICP iterations:  12

f:id:nokixa:20220319233901p:plain

    ICP iterations:  7

f:id:nokixa:20220319233903p:plain

    ICP iterations:  8

f:id:nokixa:20220319233906p:plain

    ICP iterations:  15

f:id:nokixa:20220319233908p:plain

    ICP iterations:  100

f:id:nokixa:20220319233911p:plain

    ICP iterations:  6

f:id:nokixa:20220319233914p:plain

    ICP iterations:  6

f:id:nokixa:20220319233916p:plain

    ICP iterations:  6

f:id:nokixa:20220319233919p:plain

    ICP iterations:  16

f:id:nokixa:20220319233921p:plain

    ICP iterations:  12

f:id:nokixa:20220319233924p:plain

    ICP iterations:  12

f:id:nokixa:20220319233927p:plain

    ICP iterations:  7

f:id:nokixa:20220319233930p:plain

    ICP iterations:  8

f:id:nokixa:20220319233932p:plain

    ICP iterations:  2

f:id:nokixa:20220319233935p:plain

    ICP iterations:  8

f:id:nokixa:20220319233937p:plain

    ICP iterations:  7

f:id:nokixa:20220319233940p:plain

    ICP iterations:  11

f:id:nokixa:20220319233943p:plain

    ICP iterations:  7

f:id:nokixa:20220319233945p:plain

    ICP iterations:  3

f:id:nokixa:20220319233948p:plain

    ICP iterations:  10

f:id:nokixa:20220319233950p:plain

    ICP iterations:  7

f:id:nokixa:20220319233953p:plain

    ICP iterations:  12

f:id:nokixa:20220319233956p:plain

    ICP iterations:  7

f:id:nokixa:20220319233958p:plain

    ICP iterations:  4

f:id:nokixa:20220319234001p:plain

    ICP iterations:  7

f:id:nokixa:20220319234003p:plain

    ICP iterations:  100

f:id:nokixa:20220319234006p:plain

    ICP iterations:  100

f:id:nokixa:20220319234009p:plain

    ICP iterations:  100

f:id:nokixa:20220319234012p:plain

That gives a rough view of the ICP process, though the changes are gradual enough that it's a little hard to follow.

I did look at the non-converging patterns, but they just repeat almost the same transform over and over, so there isn't much to be learned there…
About all I can say is that they are all non-digit contours, and small ones at that.

Changing the ICP processing

The first convergence criterion that came to mind is the sum of the nearest-neighbor distances.
Since the non-converging cases just repeated the same pattern, terminating ICP as soon as that sum increases seems good enough.
Let's see how the sum of nearest-neighbor distances actually behaves.

# Find optimum affine matrix using ICP algorithm
# src_pts: ndarray, shape is (n_s,2) (n_s: number of points)
# dst_pts: ndarray, shape is (n_d,2) (n_d: number of points, n_d should be larger or equal to n_s)
# initial_matrix: ndarray, shape is (2,3)
def icp(src_pts, dst_pts, max_iter=100, initial_matrix=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]), debug=False):
    default_affine_matrix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    if dst_pts.shape[0] < src_pts.shape[0]:
        print("icp: Insufficient destination points")
        return default_affine_matrix, False, [], []
    if initial_matrix.shape != (2,3):
        print("icp: Illegal shape of initial_matrix")
        return default_affine_matrix, False, [], []
    M = initial_matrix
    # Store indices of the nearest neighbor point of dst_pts to the converted point of src_pts
    nn_idx = []
    converted_pts_list = []
    nn_distances_list = []
    for i in range(max_iter):
        nn_idx_tmp = []
        dst_pts_list = [p for p in dst_pts]
        idx_list = list(range(0,dst_pts.shape[0]))
        if debug: converted_pts = []; nn_distances = 0.0;
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts_list, p2)
            nn_idx_tmp += [idx_list[idx]]
            del dst_pts_list[idx]
            del idx_list[idx]
            if debug: converted_pts += [p2]; nn_distances += d;
        if debug: converted_pts_list += [converted_pts]; nn_distances_list += [nn_distances];
        if nn_idx != [] and nn_idx == nn_idx_tmp:
            break
        dst_pts2 = np.zeros_like(src_pts)
        for j,idx in enumerate(nn_idx_tmp):
            dst_pts2[j,:] = dst_pts[idx,:]
        M = estimate_affine_2d(src_pts, dst_pts2)
        nn_idx = nn_idx_tmp
        if i == max_iter -1:
            return M, False, converted_pts_list, nn_distances_list
    return M, True, converted_pts_list, nn_distances_list

labels = [-1, 0, 1, 2, 3, 5]
labels_checked = {lab: False for lab in labels}

for i, target in enumerate(ctrs_all_datasets[1]):
    ctr_img = cv2.drawContours(subimgs_all[1][i].copy(), [subctrs_all[1][i]], -1, (0,255,0), 2)
    for tmpl in templates1[1:5]:
        M, sim_init, _ = get_transform_by_rotated_rectangle(tmpl, target)
        M, result, converted_pts, nn_distances  = icp(tmpl.pts, target.pts, initial_matrix=M, debug=True)
        if labels_checked[labels2[i]] and result:
            continue
        print('ICP iterations: ', len(converted_pts))
        plt.figure(figsize=(3.2, 2.4),dpi=100)
        plt.suptitle('Contour No. %d' %(i) + ', Label %d' %(labels2[i]) + ', Template %d' %(tmpl.num))
        plt.plot(nn_distances)
        plt.show()
    labels_checked[labels2[i]] = True
    ICP iterations:  12

f:id:nokixa:20220319234016p:plain

    ICP iterations:  7

f:id:nokixa:20220319234018p:plain

    ICP iterations:  8

f:id:nokixa:20220319234021p:plain

    ICP iterations:  15

f:id:nokixa:20220319234023p:plain

    ICP iterations:  100

f:id:nokixa:20220319234026p:plain

    ICP iterations:  6

f:id:nokixa:20220319234028p:plain

    ICP iterations:  6

f:id:nokixa:20220319234030p:plain

    ICP iterations:  6

f:id:nokixa:20220319234033p:plain

    ICP iterations:  16

f:id:nokixa:20220319234035p:plain

    ICP iterations:  12

f:id:nokixa:20220319234038p:plain

    ICP iterations:  12

f:id:nokixa:20220319234040p:plain

    ICP iterations:  7

f:id:nokixa:20220319234043p:plain

    ICP iterations:  8

f:id:nokixa:20220319234045p:plain

    ICP iterations:  2

f:id:nokixa:20220319234048p:plain

    ICP iterations:  8

f:id:nokixa:20220319234051p:plain

    ICP iterations:  7

f:id:nokixa:20220319234053p:plain

    ICP iterations:  11

f:id:nokixa:20220319234056p:plain

    ICP iterations:  7

f:id:nokixa:20220319234059p:plain

    ICP iterations:  3

f:id:nokixa:20220319234102p:plain

    ICP iterations:  10

f:id:nokixa:20220319234105p:plain

    ICP iterations:  7

f:id:nokixa:20220319234107p:plain

    ICP iterations:  12

f:id:nokixa:20220319234110p:plain

    ICP iterations:  7

f:id:nokixa:20220319234112p:plain

    ICP iterations:  4

f:id:nokixa:20220319234114p:plain

    ICP iterations:  7

f:id:nokixa:20220319234117p:plain

    ICP iterations:  100

f:id:nokixa:20220319234119p:plain

    ICP iterations:  100

f:id:nokixa:20220319234122p:plain

    ICP iterations:  100

f:id:nokixa:20220319234124p:plain

The sum of nearest-neighbor distances does tend to decrease with each iteration, but it sometimes increases as well.
Not great as a convergence criterion, then…

There are also patterns that oscillate with a cycle longer than 2.

I'll give up on reworking the ICP convergence condition and instead count on the initial-transform stage to reject the patterns that won't converge properly; the most I'll do is set a smaller cap on the iteration count.

Classification from the initial transform matrix

Let's see how much can be determined from the transform computed from the rotated bounding rectangles alone. Two things to check:

  • How high the similarity gets
    If this transform alone were enough to classify, ICP would become unnecessary…
  • Whether the difference in bounding-rectangle aspect ratios can be used
    If the aspect ratios differ greatly, the template is presumably not a match in the first place.

Checking the data

Collect the initial-transform information from the actual data.

I compute the aspect ratio as short side / long side.
The "rectangle size" returned by cv2.minAreaRect() doesn't indicate which value is the short side and which is the long side, so I compare the two first and then take the ratio.
The code below also computes each ratio relative to the template's aspect ratio.

templates1_ratios = []
for tmpl in templates1:
    _,(w,h),_ = tmpl.rrect
    templates1_ratios += [w/h] if w < h else [h/w]

templates3_ratios = []
for tmpl in templates3:  # fixed: the original run iterated templates1 here, which is why the printed ratios below are identical for all three
    _,(w,h),_ = tmpl.rrect
    templates3_ratios += [w/h] if w < h else [h/w]

templates5_ratios = []
for tmpl in templates5:  # same copy-paste fix as above
    _,(w,h),_ = tmpl.rrect
    templates5_ratios += [w/h] if w < h else [h/w]

print('template1_ratios: ', templates1_ratios)
print('template3_ratios: ', templates3_ratios)
print('template5_ratios: ', templates5_ratios)
templates_sel = [1, 1, 3, 5, 5, 5, 5]

def select_templates(i):
    if i == 1: return templates1
    elif i == 3: return templates3;
    else: return templates5;

initial_similarities_all = []
rect_ratios_all = []
for tsel,ctrs_datasets in zip(templates_sel, ctrs_all_datasets):
    templates = select_templates(tsel)    
    initial_similarities = []
    rect_ratios = []
    for target in ctrs_datasets:
        _,(w,h),_ = target.rrect
        rect_ratios += [w/h] if w < h else [h/w]
        sims = []
        for tmpl in templates:
            _, sim, _ = get_transform_by_rotated_rectangle(tmpl, target)
            sims += [sim]
        initial_similarities += [sims]
    initial_similarities_all += [initial_similarities]
    rect_ratios_all += [rect_ratios]
template1_ratios:  [0.7247706648520233, 0.45857414752846004, 0.7967290110813836, 0.7636363636363637, 0.7428571034673234]
template3_ratios:  [0.7247706648520233, 0.45857414752846004, 0.7967290110813836, 0.7636363636363637, 0.7428571034673234]
template5_ratios:  [0.7247706648520233, 0.45857414752846004, 0.7967290110813836, 0.7636363636363637, 0.7428571034673234]
labs = labels1 + labels2 + labels3 + labels4 + labels5 + labels6 + labels7
label_colors = {-1:'black', 0:'brown', 1:'red', 2:'orange', 3:'khaki', 5:'green'}
numbers = [0, 1, 2, 3, 5]
for i,num in enumerate(numbers):
    ratios = []
    sims = []
    colors = []
    for j in range(7):
        if templates_sel[j] == 1:
            base_ratio = templates1_ratios[i]
        elif templates_sel[j] == 3:
            base_ratio = templates3_ratios[i]
        else:
            base_ratio = templates5_ratios[i]
        for r in rect_ratios_all[j]:
            ratios += [r/base_ratio]
        for s in initial_similarities_all[j]:
            sims += [s[i]]
    for lab in labs:
        colors += [label_colors[lab]]
    plt.title('Number: %d' %(numbers[i]))
    plt.scatter(ratios, sims, c=colors)
    plt.show()

f:id:nokixa:20220319234127p:plain

f:id:nokixa:20220319234129p:plain

f:id:nokixa:20220319234132p:plain

f:id:nokixa:20220319234134p:plain

f:id:nokixa:20220319234137p:plain

See the following page for the plot marker color names:

https://matplotlib.org/stable/gallery/color/named_colors.html

Looking at the results:

  • For the aspect ratio, the non-digit contours are spread widely while the digit contours cluster fairly tightly
    • so the aspect ratio should let us narrow down the candidates to some extent
  • For the digit each contour actually corresponds to, the aspect ratio is, unsurprisingly, close to that of the template
  • Even this transform alone yields reasonably high similarities, but not enough to discriminate perfectly

Rough thresholds

For the bounding-rectangle aspect ratio and the similarity under this transform, pick thresholds at which the call is clearly safe.

  • Aspect ratio: assume we tolerate shooting the point stickers from up to a 45° angle. Tilting the camera shrinks the image by a factor of 1/√2 in the tilt direction. So, comparing the aspect ratio against each digit template, anything below 0.7× or above 1.4× can safely be judged as not being that digit. The plots above also look fine with these thresholds. (A small sanity-check sketch follows this list.)
  • Similarity: except for "0", a similarity of 0.95 or above looks like a match and 0.7 or below a non-match.
    What to do about "0"? As it happens, the point counting never actually needs to identify "0", so this is not a problem.

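As a sanity check on those 0.7/1.4 numbers (my own sketch): a 45° tilt scales one axis by cos 45° = 1/√2 ≈ 0.707, so the ratio of the target's aspect ratio to the template's should stay within roughly [0.707, 1.414]. A gate of this form is folded into the revised get_similarity_with_template() further below.

# Aspect-ratio gate corresponding to the 45° tilt assumption (a sketch).
import math

def ratio_gate(target_rrect, template_rrect, lo=0.7, hi=1.4):
    _, (w1, h1), _ = target_rrect      # rrect as returned by cv2.minAreaRect()
    _, (w2, h2), _ = template_rrect
    r1 = min(w1, h1) / max(w1, h1)     # short side / long side
    r2 = min(w2, h2) / max(w2, h2)
    r = r1 / r2
    return lo <= r <= hi               # False -> clearly not this digit

print(1 / math.sqrt(2))  # 0.7071..., roughly the lower bound 0.7
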
SVM on the initial-transform results

Let's try classifying with an SVM on the similarity vectors obtained from this transform alone.
If this works well, ICP becomes unnecessary…

import copy
import random

svm_inputs = []
for sims in initial_similarities_all:
    svm_inputs += copy.deepcopy(sims)
svm_labels = copy.deepcopy(labs)

# Remove inadequate contour data in img1
del svm_inputs[30]
del svm_labels[30]

def get_random_sample(data_in, labels_in, selected_labels, n_samples):
    data_rtn = []
    labels_rtn = []
    for lab in selected_labels:
        samples = [d for i,d in enumerate(data_in) if labels_in[i]==lab]
        n = min(n_samples, len(samples))
        data_rtn += random.sample(samples, n)
        labels_rtn += [lab] * n
    return data_rtn, labels_rtn

train_data, train_labels = get_random_sample(svm_inputs, svm_labels, [-1,0,1,2,3,5], 10)

svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setType(cv2.ml.SVM_C_SVC)
svm.setC(100)
svm.setGamma(1)
svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels));

result = svm.predict(np.array(svm_inputs, 'float32'))

# Dictionary containing classified count for each number
svm_stats = {k:{k2:0 for k2 in [-1, 0, 1, 2, 3, 5]} for k in [-1, 0, 1, 2, 3, 5]}
for res, lab in zip(result[1], svm_labels):
    svm_stats[lab][int(res[0])] += 1
for k,v in svm_stats.items():
    print('label {:>2}'.format(k), ': {', end='')
    for k2,v2 in v.items():
        print('{}: {:>2}, '.format(k2,v2), end='')
    print('}')
label -1 : {-1: 75, 0:  9, 1:  2, 2:  0, 3:  0, 5:  3, }
label  0 : {-1:  0, 0: 27, 1:  0, 2:  0, 3:  0, 5:  0, }
label  1 : {-1:  3, 0:  0, 1: 75, 2:  0, 3:  0, 5:  0, }
label  2 : {-1:  0, 0:  0, 1:  0, 2: 39, 3:  0, 5:  0, }
label  3 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  2, 5:  0, }
label  5 : {-1:  2, 0:  0, 1:  0, 2:  0, 3:  0, 5: 27, }

It does surprisingly well…
Since it's still not perfect, though, ICP does seem worth keeping.

What happens if the aspect ratio is included in the SVM input?

svm_inputs = []
for i,sims in enumerate(initial_similarities_all):
    for j,sim in enumerate(sims):
        svm_inputs += [copy.copy(sim + [rect_ratios_all[i][j]])]
svm_labels = copy.deepcopy(labs)

# Remove inadequate contour data in img1
del svm_inputs[30]
del svm_labels[30]

train_data, train_labels = get_random_sample(svm_inputs, svm_labels, [-1,0,1,2,3,5], 10)

svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setType(cv2.ml.SVM_C_SVC)
svm.setC(100)
svm.setGamma(1)
svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels));

result = svm.predict(np.array(svm_inputs, 'float32'))

# Dictionary containing classified count for each number
svm_stats = {k:{k2:0 for k2 in [-1, 0, 1, 2, 3, 5]} for k in [-1, 0, 1, 2, 3, 5]}
for res, lab in zip(result[1], svm_labels):
    svm_stats[lab][int(res[0])] += 1
for k,v in svm_stats.items():
    print('label {:>2}'.format(k), ': {', end='')
    for k2,v2 in v.items():
        print('{}: {:>2}, '.format(k2,v2), end='')
    print('}')
label -1 : {-1: 69, 0: 11, 1:  4, 2:  0, 3:  0, 5:  5, }
label  0 : {-1:  0, 0: 27, 1:  0, 2:  0, 3:  0, 5:  0, }
label  1 : {-1:  3, 0:  0, 1: 75, 2:  0, 3:  0, 5:  0, }
label  2 : {-1:  0, 0:  0, 1:  0, 2: 39, 3:  0, 5:  0, }
label  3 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  2, 5:  0, }
label  5 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  0, 5: 29, }

Not much of an improvement.
(Note: I ran the code above several times in the Jupyter notebook, and since the training data is randomly sampled, the results did occasionally come out better.)

Revising the processing

Apply the revisions described above and try the SVM digit classification once more.

While at it, I also:

  • remove the debug features
  • set the default ICP iteration limit to around 20

# Find optimum affine matrix using ICP algorithm
# src_pts: ndarray, shape is (n_s,2) (n_s: number of points)
# dst_pts: ndarray, shape is (n_d,2) (n_d: number of points, n_d should be larger or equal to n_s)
# initial_matrix: ndarray, shape is (2,3)
def icp(src_pts, dst_pts, max_iter=20, initial_matrix=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])):
    default_affine_matrix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    if dst_pts.shape[0] < src_pts.shape[0]:
        print("icp: Insufficient destination points")
        return default_affine_matrix, False
    if initial_matrix.shape != (2,3):
        print("icp: Illegal shape of initial_matrix")
        return default_affine_matrix, False
    M = initial_matrix
    # Store indices of the nearest neighbor point of dst_pts to the converted point of src_pts
    nn_idx = []
    for i in range(max_iter):
        nn_idx_tmp = []
        dst_pts_list = [p for p in dst_pts]
        idx_list = list(range(0,dst_pts.shape[0]))
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts_list, p2)
            nn_idx_tmp += [idx_list[idx]]
            del dst_pts_list[idx]
            del idx_list[idx]
        if nn_idx != [] and nn_idx == nn_idx_tmp:
            break
        dst_pts2 = np.zeros_like(src_pts)
        for j,idx in enumerate(nn_idx_tmp):
            dst_pts2[j,:] = dst_pts[idx,:]
        M = estimate_affine_2d(src_pts, dst_pts2)
        nn_idx = nn_idx_tmp
        if i == max_iter -1:
            return M, False
    return M, True
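
Before rerunning on the real data, a quick synthetic check of the revised icp() (my own sketch): generate grid points, move them with a known, gentle affine transform, and see whether it is recovered. With a transform this close to identity, the greedy nearest-neighbor matching should settle within a couple of iterations.

# Synthetic sanity check for icp(); a gentle transform keeps the initial
# nearest-neighbor matching correct, so convergence is expected.
src = np.array([[x, y] for x in range(0, 50, 5) for y in range(0, 50, 5)], dtype=float)
M_true = np.array([[1.0, -0.05,  2.0],
                   [0.05, 1.0,  -1.0]])
dst = src @ M_true[:, 0:2].T + M_true[:, 2]
M_est, ok = icp(src, dst)
print(ok, np.abs(M_est - M_true).max())  # expect True and a tiny residual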

def get_similarity_with_template(target_data, template_data, sim_th_high=0.95, sim_th_low=0.7):
    _,(w1,h1), _ = target_data.rrect
    _,(w2,h2), _ = template_data.rrect
    r = w1/h1 if w1 < h1 else h1/w1
    r = r * h2/w2 if w2 < h2 else r * w2/h2
    M, sim_init, converted_img = get_transform_by_rotated_rectangle(template_data, target_data)
    if sim_init > sim_th_high or sim_init < sim_th_low or r > 1.4 or r < 0.7:
        return sim_init, converted_img
    M, _ = icp(template_data.pts, target_data.pts, initial_matrix=M)
    Minv = cv2.invertAffineTransform(M)
    converted_ctr = np.zeros_like(target_data.ctr)
    for i in range(target_data.ctr.shape[0]):
        converted_ctr[i,0,:] = (Minv[:,0:2] @ target_data.ctr[i,0,:]) + Minv[:,2]
    converted_img = create_solid_contour(converted_ctr, img_shape=template_data.solid.shape)
    val = cv2.matchTemplate(converted_img, template_data.solid, cv2.TM_CCORR_NORMED)
    return val[0,0], converted_img

def get_similarities(target, templates):
    similarities = []
    converted_imgs = []
    for tmpl in templates:
        if tmpl.num == 0:
            sim,converted_img = get_similarity_with_template_zero(target, tmpl)
        else:
            sim,converted_img = get_similarity_with_template(target, tmpl)
        similarities += [sim]
        converted_imgs += [converted_img]
    return similarities, converted_imgs

# target: Single contour to compare
# templates: List of template_dataset (for numbers 0, 1, 2, 3, 5)
# svm: Trained SVM
# return: determined number (0,1,2,3,5), -1 if none corresponds
def determine_number(target, templates, svm):
    similarities,_ = get_similarities(target, templates)
    # cv2.ml expects a 2D float32 sample array
    _, result = svm.predict(np.array([similarities], 'float32'))
    return int(result[0])

templates_sel = [1, 1, 3, 5, 5, 5, 5]
similarities_all = []
converted_imgs_all = []
templates_sel_all = []
for i,(tsel,ctrs_datasets) in enumerate(zip(templates_sel, ctrs_all_datasets)):
    print('Dataset No. ', i)
    print('  Contour No. ', end='')
    templates = select_templates(tsel)
    for j,target in enumerate(ctrs_datasets):
        print(j, ' ', end='')
        sims, imgs = get_similarities(target, templates)
        similarities_all += [sims]
        converted_imgs_all += [imgs]
    print('')
    templates_sel_all += [tsel]
Dataset No.  0
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  
Dataset No.  1
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40  41  42  43  44  45  
Dataset No.  2
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40  41  42  43  44  45  46  47  
Dataset No.  3
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  35  36  37  38  39  40  
Dataset No.  4
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  33  34  
Dataset No.  5
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31  32  
Dataset No.  6
  Contour No. 0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  
svm_inputs = copy.deepcopy(similarities_all)
svm_labels = copy.deepcopy(labs)

# Remove inadequate contour data in img1
del svm_inputs[30]
del svm_labels[30]

train_data, train_labels = get_random_sample(svm_inputs, svm_labels, [-1,0,1,2,3,5], 20)

svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setType(cv2.ml.SVM_C_SVC)
svm.setC(100)
svm.setGamma(1)
svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels))

result = svm.predict(np.array(svm_inputs, 'float32'))

# Dictionary containing classified count for each number
svm_stats = {k:{k2:0 for k2 in [-1, 0, 1, 2, 3, 5]} for k in [-1, 0, 1, 2, 3, 5]}
for res, lab in zip(result[1], svm_labels):
    svm_stats[lab][int(res[0])] += 1
for k,v in svm_stats.items():
    print('label {:>2}'.format(k), ': {', end='')
    for k2,v2 in v.items():
        print('{}: {:>2}, '.format(k2,v2), end='')
    print('}')

print('Misclassified data')
for sims,lab,res in zip(svm_inputs, svm_labels, result[1]):
    if lab != res[0]:
        print('{: }'.format(lab), ' -> ', '{: }'.format(res[0]), ' [',end='')
        for s in sims: print('{:.3f}, '.format(s), end='');
        print(']')

print('All data')
for sims,lab,res in zip(svm_inputs, svm_labels, result[1]):
    print('{: }'.format(lab), ' -> ', '{: }'.format(res[0]), ' [',end='')
    for s in sims: print('{:.3f}, '.format(s), end='');
    print(']')
label -1 : {-1: 77, 0:  9, 1:  0, 2:  0, 3:  0, 5:  3, }
label  0 : {-1:  1, 0: 26, 1:  0, 2:  0, 3:  0, 5:  0, }
label  1 : {-1:  0, 0:  0, 1: 78, 2:  0, 3:  0, 5:  0, }
label  2 : {-1:  0, 0:  0, 1:  0, 2: 39, 3:  0, 5:  0, }
label  3 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  2, 5:  0, }
label  5 : {-1:  0, 0:  0, 1:  0, 2:  0, 3:  0, 5: 29, }
Misclassified data
 0  ->  -1.0  [0.894, 0.863, 0.730, 0.747, 0.795, ]
-1  ->   5.0  [0.864, 0.767, 0.785, 0.835, 0.899, ]
-1  ->   0.0  [0.913, 0.838, 0.728, 0.744, 0.808, ]
-1  ->   5.0  [0.860, 0.752, 0.796, 0.831, 0.875, ]
-1  ->   0.0  [0.957, 0.873, 0.779, 0.764, 0.802, ]
-1  ->   0.0  [0.928, 0.857, 0.719, 0.718, 0.788, ]
-1  ->   0.0  [0.944, 0.820, 0.724, 0.718, 0.800, ]
-1  ->   0.0  [0.913, 0.861, 0.723, 0.745, 0.792, ]
-1  ->   0.0  [0.947, 0.878, 0.760, 0.769, 0.822, ]
-1  ->   0.0  [0.948, 0.886, 0.725, 0.750, 0.804, ]
-1  ->   0.0  [0.952, 0.839, 0.746, 0.771, 0.825, ]
-1  ->   5.0  [0.868, 0.785, 0.767, 0.819, 0.886, ]
-1  ->   0.0  [0.949, 0.846, 0.745, 0.743, 0.787, ]
All data
-1  ->  -1.0  [0.908, 0.885, 0.733, 0.787, 0.790, ]
-1  ->  -1.0  [0.892, 0.832, 0.748, 0.763, 0.826, ]
-1  ->  -1.0  [0.901, 0.817, 0.783, 0.754, 0.823, ]
-1  ->  -1.0  [0.895, 0.823, 0.798, 0.789, 0.828, ]
-1  ->  -1.0  [0.909, 0.830, 0.738, 0.773, 0.813, ]
-1  ->  -1.0  [0.922, 0.841, 0.784, 0.759, 0.813, ]
 5  ->   5.0  [0.797, 0.811, 0.769, 0.855, 0.935, ]
 0  ->  -1.0  [0.894, 0.863, 0.730, 0.747, 0.795, ]
 5  ->   5.0  [0.826, 0.803, 0.745, 0.838, 1.000, ]
 1  ->   1.0  [0.807, 0.970, 0.817, 0.797, 0.807, ]
 5  ->   5.0  [0.809, 0.805, 0.749, 0.827, 0.974, ]
 0  ->   0.0  [0.942, 0.829, 0.735, 0.733, 0.801, ]
 2  ->   2.0  [0.770, 0.804, 0.965, 0.804, 0.804, ]
 1  ->   1.0  [0.834, 0.969, 0.778, 0.778, 0.779, ]
 2  ->   2.0  [0.759, 0.807, 0.966, 0.805, 0.802, ]
-1  ->  -1.0  [0.787, 0.799, 0.778, 0.748, 0.745, ]
-1  ->   5.0  [0.864, 0.767, 0.785, 0.835, 0.899, ]
 1  ->   1.0  [0.785, 0.972, 0.812, 0.802, 0.786, ]
 1  ->   1.0  [0.713, 0.978, 0.801, 0.774, 0.784, ]
 5  ->   5.0  [0.824, 0.813, 0.750, 0.834, 0.971, ]
 0  ->   0.0  [0.966, 0.813, 0.731, 0.732, 0.800, ]
 2  ->   2.0  [0.776, 0.790, 0.966, 0.804, 0.796, ]
 5  ->   5.0  [0.820, 0.773, 0.765, 0.844, 0.932, ]
 0  ->   0.0  [0.978, 0.811, 0.741, 0.735, 0.784, ]
 2  ->   2.0  [0.764, 0.794, 1.000, 0.805, 0.796, ]
 5  ->   5.0  [0.837, 0.777, 0.749, 0.808, 0.960, ]
 0  ->   0.0  [0.953, 0.816, 0.740, 0.709, 0.808, ]
 1  ->   1.0  [0.832, 1.000, 0.802, 0.776, 0.785, ]
 2  ->   2.0  [0.769, 0.795, 0.976, 0.804, 0.807, ]
-1  ->  -1.0  [0.885, 0.808, 0.770, 0.787, 0.832, ]
 1  ->   1.0  [0.819, 0.951, 0.787, 0.777, 0.799, ]
 2  ->   2.0  [0.716, 0.813, 0.972, 0.799, 0.811, ]
 3  ->   3.0  [0.789, 0.760, 0.804, 1.000, 0.838, ]
 1  ->   1.0  [0.671, 0.983, 0.793, 0.763, 0.785, ]
 5  ->   5.0  [0.834, 0.805, 0.786, 0.833, 0.924, ]
 0  ->   0.0  [0.933, 0.859, 0.746, 0.742, 0.807, ]
-1  ->  -1.0  [0.776, 0.841, 0.681, 0.701, 0.777, ]
-1  ->  -1.0  [0.913, 0.906, 0.793, 0.786, 0.792, ]
-1  ->  -1.0  [0.899, 0.847, 0.771, 0.774, 0.837, ]
-1  ->   0.0  [0.913, 0.838, 0.728, 0.744, 0.808, ]
-1  ->  -1.0  [0.898, 0.825, 0.758, 0.817, 0.782, ]
-1  ->  -1.0  [0.917, 0.824, 0.765, 0.755, 0.818, ]
-1  ->  -1.0  [0.914, 0.834, 0.795, 0.736, 0.793, ]
 5  ->   5.0  [0.788, 0.808, 0.761, 0.840, 0.938, ]
 0  ->   0.0  [0.901, 0.856, 0.728, 0.738, 0.792, ]
 5  ->   5.0  [0.818, 0.816, 0.751, 0.780, 0.972, ]
 1  ->   1.0  [0.806, 0.979, 0.819, 0.807, 0.808, ]
 5  ->   5.0  [0.793, 0.801, 0.746, 0.790, 0.969, ]
 0  ->   0.0  [0.938, 0.840, 0.734, 0.733, 0.803, ]
 2  ->   2.0  [0.774, 0.802, 0.961, 0.805, 0.805, ]
 1  ->   1.0  [0.831, 0.978, 0.793, 0.776, 0.786, ]
 2  ->   2.0  [0.753, 0.810, 0.963, 0.808, 0.805, ]
-1  ->   5.0  [0.860, 0.752, 0.796, 0.831, 0.875, ]
-1  ->  -1.0  [0.800, 0.807, 0.771, 0.752, 0.747, ]
 1  ->   1.0  [0.787, 0.964, 0.807, 0.797, 0.789, ]
 1  ->   1.0  [0.699, 0.980, 0.806, 0.774, 0.791, ]
 5  ->   5.0  [0.816, 0.818, 0.759, 0.831, 0.958, ]
 0  ->   0.0  [0.973, 0.803, 0.741, 0.736, 0.786, ]
-1  ->  -1.0  [0.864, 0.837, 0.766, 0.780, 0.830, ]
 2  ->   2.0  [0.779, 0.794, 0.962, 0.802, 0.799, ]
 5  ->   5.0  [0.827, 0.768, 0.763, 0.853, 0.930, ]
 0  ->   0.0  [0.980, 0.800, 0.744, 0.738, 0.800, ]
 2  ->   2.0  [0.762, 0.798, 0.979, 0.797, 0.801, ]
 5  ->   5.0  [0.841, 0.777, 0.761, 0.770, 0.957, ]
 0  ->   0.0  [0.961, 0.810, 0.739, 0.710, 0.814, ]
 1  ->   1.0  [0.830, 0.979, 0.799, 0.777, 0.780, ]
 2  ->   2.0  [0.780, 0.793, 0.969, 0.808, 0.800, ]
 5  ->   5.0  [0.838, 0.772, 0.785, 0.839, 0.940, ]
 0  ->   0.0  [0.896, 0.839, 0.760, 0.709, 0.817, ]
 1  ->   1.0  [0.813, 0.956, 0.781, 0.774, 0.786, ]
-1  ->  -1.0  [0.842, 0.865, 0.780, 0.768, 0.817, ]
 2  ->   2.0  [0.702, 0.813, 0.974, 0.800, 0.807, ]
-1  ->  -1.0  [0.842, 0.875, 0.757, 0.781, 0.861, ]
-1  ->  -1.0  [0.848, 0.868, 0.777, 0.758, 0.810, ]
-1  ->  -1.0  [0.869, 0.848, 0.766, 0.769, 0.819, ]
 3  ->   3.0  [0.799, 0.767, 0.805, 0.975, 0.843, ]
-1  ->   0.0  [0.957, 0.873, 0.779, 0.764, 0.802, ]
 5  ->   5.0  [0.843, 0.801, 0.778, 0.832, 0.918, ]
 0  ->   0.0  [0.923, 0.846, 0.750, 0.744, 0.799, ]
-1  ->  -1.0  [0.883, 0.865, 0.780, 0.778, 0.829, ]
 1  ->   1.0  [0.663, 0.971, 0.789, 0.758, 0.776, ]
-1  ->  -1.0  [0.847, 0.843, 0.761, 0.764, 0.823, ]
-1  ->  -1.0  [0.879, 0.866, 0.778, 0.793, 0.812, ]
-1  ->  -1.0  [0.905, 0.855, 0.781, 0.782, 0.796, ]
-1  ->  -1.0  [0.781, 0.736, 0.658, 0.695, 0.719, ]
-1  ->  -1.0  [0.914, 0.795, 0.744, 0.772, 0.842, ]
-1  ->  -1.0  [0.906, 0.812, 0.754, 0.757, 0.847, ]
 1  ->   1.0  [0.791, 1.000, 0.804, 0.817, 0.792, ]
 1  ->   1.0  [0.764, 0.979, 0.770, 0.764, 0.787, ]
 5  ->   5.0  [0.833, 0.774, 0.779, 0.777, 1.000, ]
 0  ->   0.0  [0.967, 0.792, 0.733, 0.709, 0.816, ]
 1  ->   1.0  [0.771, 0.978, 0.775, 0.763, 0.775, ]
 1  ->   1.0  [0.763, 0.983, 0.779, 0.767, 0.784, ]
 5  ->   5.0  [0.785, 0.776, 0.779, 0.809, 0.969, ]
 0  ->   0.0  [0.928, 0.815, 0.702, 0.713, 0.805, ]
 5  ->   5.0  [0.833, 0.790, 0.788, 0.756, 0.973, ]
 0  ->   0.0  [0.982, 0.784, 0.707, 0.707, 0.794, ]
-1  ->  -1.0  [0.878, 0.857, 0.784, 0.824, 0.801, ]
-1  ->  -1.0  [0.885, 0.791, 0.726, 0.778, 0.833, ]
-1  ->  -1.0  [0.874, 0.774, 0.760, 0.767, 0.831, ]
 2  ->   2.0  [0.731, 0.780, 1.000, 0.809, 0.801, ]
-1  ->  -1.0  [0.874, 0.813, 0.739, 0.765, 0.821, ]
-1  ->  -1.0  [0.898, 0.801, 0.769, 0.772, 0.833, ]
-1  ->  -1.0  [0.887, 0.844, 0.776, 0.783, 0.836, ]
 1  ->   1.0  [0.817, 0.966, 0.790, 0.776, 0.802, ]
 1  ->   1.0  [0.772, 0.980, 0.777, 0.763, 0.794, ]
 1  ->   1.0  [0.771, 0.976, 0.775, 0.766, 0.783, ]
-1  ->  -1.0  [0.878, 0.812, 0.762, 0.770, 0.812, ]
 1  ->   1.0  [0.792, 0.962, 0.781, 0.767, 0.787, ]
-1  ->  -1.0  [0.883, 0.795, 0.745, 0.803, 0.836, ]
-1  ->  -1.0  [0.889, 0.775, 0.755, 0.765, 0.834, ]
 1  ->   1.0  [0.811, 0.972, 0.779, 0.777, 0.783, ]
 1  ->   1.0  [0.773, 0.967, 0.759, 0.758, 0.779, ]
-1  ->  -1.0  [0.883, 0.786, 0.763, 0.785, 0.843, ]
 2  ->   2.0  [0.736, 0.754, 0.957, 0.802, 0.800, ]
-1  ->  -1.0  [0.889, 0.792, 0.734, 0.788, 0.801, ]
 1  ->   1.0  [0.775, 0.970, 0.754, 0.757, 0.784, ]
-1  ->  -1.0  [0.887, 0.789, 0.750, 0.792, 0.834, ]
 1  ->   1.0  [0.806, 0.967, 0.784, 0.775, 0.784, ]
 2  ->   2.0  [0.736, 0.788, 0.962, 0.800, 0.791, ]
-1  ->  -1.0  [0.889, 0.793, 0.773, 0.794, 0.839, ]
 1  ->   1.0  [0.789, 0.972, 0.776, 0.762, 0.779, ]
-1  ->  -1.0  [0.895, 0.781, 0.736, 0.806, 0.851, ]
-1  ->  -1.0  [0.774, 0.883, 0.672, 0.712, 0.778, ]
 2  ->   2.0  [0.730, 0.781, 0.979, 0.805, 0.790, ]
 5  ->   5.0  [0.830, 0.767, 0.787, 0.769, 0.952, ]
-1  ->  -1.0  [0.746, 0.777, 0.692, 0.735, 0.803, ]
 0  ->   0.0  [0.982, 0.781, 0.700, 0.709, 0.815, ]
-1  ->  -1.0  [0.889, 0.793, 0.769, 0.806, 0.821, ]
 1  ->   1.0  [0.794, 0.954, 0.773, 0.760, 0.799, ]
 1  ->   1.0  [0.773, 0.961, 0.750, 0.766, 0.784, ]
-1  ->  -1.0  [0.701, 0.873, 0.817, 0.804, 0.840, ]
-1  ->  -1.0  [0.892, 0.852, 0.794, 0.816, 0.833, ]
-1  ->   0.0  [0.928, 0.857, 0.719, 0.718, 0.788, ]
-1  ->  -1.0  [0.879, 0.840, 0.788, 0.771, 0.805, ]
-1  ->  -1.0  [0.895, 0.838, 0.784, 0.784, 0.828, ]
-1  ->  -1.0  [0.909, 0.857, 0.775, 0.783, 0.830, ]
-1  ->   0.0  [0.944, 0.820, 0.724, 0.718, 0.800, ]
-1  ->   0.0  [0.913, 0.861, 0.723, 0.745, 0.792, ]
-1  ->  -1.0  [0.964, 0.873, 0.784, 0.768, 0.769, ]
-1  ->   0.0  [0.947, 0.878, 0.760, 0.769, 0.822, ]
-1  ->   0.0  [0.948, 0.886, 0.725, 0.750, 0.804, ]
-1  ->  -1.0  [0.909, 0.848, 0.784, 0.772, 0.822, ]
-1  ->  -1.0  [0.919, 0.855, 0.775, 0.760, 0.799, ]
-1  ->  -1.0  [0.866, 0.856, 0.766, 0.741, 0.766, ]
-1  ->   0.0  [0.952, 0.839, 0.746, 0.771, 0.825, ]
-1  ->  -1.0  [0.809, 0.802, 0.804, 0.787, 0.798, ]
-1  ->   5.0  [0.868, 0.785, 0.767, 0.819, 0.886, ]
-1  ->  -1.0  [0.824, 0.803, 0.741, 0.756, 0.831, ]
 1  ->   1.0  [0.801, 0.976, 0.812, 0.777, 0.818, ]
 1  ->   1.0  [0.795, 0.972, 0.805, 0.777, 0.813, ]
 1  ->   1.0  [0.766, 0.974, 0.810, 0.777, 0.814, ]
 1  ->   1.0  [0.764, 0.968, 0.834, 0.802, 0.819, ]
 1  ->   1.0  [0.652, 0.976, 0.821, 0.769, 0.812, ]
 1  ->   1.0  [0.800, 0.973, 0.811, 0.775, 0.814, ]
 1  ->   1.0  [0.797, 0.965, 0.800, 0.765, 0.808, ]
-1  ->  -1.0  [0.633, 0.719, 0.736, 0.742, 0.779, ]
 5  ->   5.0  [0.823, 0.771, 0.736, 0.833, 0.943, ]
 0  ->   0.0  [0.972, 0.826, 0.736, 0.709, 0.813, ]
 2  ->   2.0  [0.763, 0.809, 0.958, 0.808, 0.755, ]
 5  ->   5.0  [0.816, 0.825, 0.722, 0.834, 0.958, ]
 0  ->   0.0  [0.946, 0.832, 0.690, 0.713, 0.806, ]
 2  ->   2.0  [0.755, 0.808, 0.950, 0.799, 0.760, ]
 1  ->   1.0  [0.753, 0.972, 0.811, 0.770, 0.812, ]
 2  ->   2.0  [0.755, 0.806, 0.942, 0.800, 0.765, ]
 2  ->   2.0  [0.752, 0.808, 0.954, 0.802, 0.756, ]
-1  ->  -1.0  [0.781, 0.876, 0.754, 0.748, 0.796, ]
-1  ->  -1.0  [0.720, 0.792, 0.689, 0.747, 0.772, ]
-1  ->  -1.0  [0.693, 0.741, 0.682, 0.698, 0.801, ]
 1  ->   1.0  [0.800, 0.971, 0.804, 0.777, 0.817, ]
 1  ->   1.0  [0.837, 0.949, 0.800, 0.779, 0.810, ]
 1  ->   1.0  [0.767, 0.970, 0.805, 0.767, 0.809, ]
-1  ->  -1.0  [0.883, 0.810, 0.772, 0.755, 0.799, ]
-1  ->  -1.0  [0.884, 0.804, 0.762, 0.764, 0.806, ]
 2  ->   2.0  [0.750, 0.819, 1.000, 0.805, 0.759, ]
 0  ->   0.0  [0.878, 0.869, 0.719, 0.703, 0.794, ]
 1  ->   1.0  [0.687, 1.000, 0.825, 0.781, 0.794, ]
 5  ->   5.0  [0.776, 0.830, 0.756, 0.830, 1.000, ]
-1  ->  -1.0  [0.623, 0.889, 0.708, 0.691, 0.745, ]
 1  ->   1.0  [0.780, 0.972, 0.845, 0.810, 0.792, ]
 1  ->   1.0  [0.840, 0.966, 0.821, 0.786, 0.789, ]
 1  ->   1.0  [0.796, 0.965, 0.812, 0.776, 0.787, ]
 1  ->   1.0  [0.804, 0.970, 0.813, 0.813, 0.790, ]
 1  ->   1.0  [0.747, 0.971, 0.818, 0.764, 0.785, ]
 1  ->   1.0  [0.812, 0.970, 0.856, 0.813, 0.801, ]
 1  ->   1.0  [0.658, 0.954, 0.822, 0.773, 0.806, ]
 1  ->   1.0  [0.804, 0.968, 0.823, 0.787, 0.797, ]
 1  ->   1.0  [0.785, 0.966, 0.807, 0.792, 0.783, ]
-1  ->   0.0  [0.949, 0.846, 0.745, 0.743, 0.787, ]
 5  ->   5.0  [0.764, 0.735, 0.721, 0.744, 0.919, ]
 1  ->   1.0  [0.765, 0.962, 0.811, 0.772, 0.800, ]
 0  ->   0.0  [0.909, 0.880, 0.705, 0.707, 0.808, ]
 5  ->   5.0  [0.815, 0.817, 0.723, 0.749, 0.934, ]
 1  ->   1.0  [0.813, 0.978, 0.845, 0.813, 0.786, ]
 2  ->   2.0  [0.752, 0.810, 0.966, 0.815, 0.753, ]
 0  ->   0.0  [0.971, 0.818, 0.726, 0.724, 0.795, ]
 5  ->   5.0  [0.850, 0.816, 0.726, 0.751, 0.944, ]
 0  ->   0.0  [0.970, 0.834, 0.716, 0.711, 0.811, ]
 2  ->   2.0  [0.773, 0.806, 0.955, 0.811, 0.766, ]
 1  ->   1.0  [0.774, 0.941, 0.824, 0.780, 0.795, ]
 2  ->   2.0  [0.778, 0.815, 0.942, 0.801, 0.766, ]
 2  ->   2.0  [0.787, 0.818, 0.951, 0.808, 0.761, ]
-1  ->  -1.0  [0.732, 0.782, 0.684, 0.682, 0.773, ]
-1  ->  -1.0  [0.733, 0.808, 0.792, 0.765, 0.813, ]
 1  ->   1.0  [0.798, 0.942, 0.776, 0.803, 0.794, ]
 1  ->   1.0  [0.851, 0.935, 0.807, 0.788, 0.810, ]
 1  ->   1.0  [0.823, 0.972, 0.805, 0.772, 0.805, ]
-1  ->  -1.0  [0.854, 0.872, 0.775, 0.783, 0.834, ]
 0  ->   0.0  [0.937, 0.873, 0.720, 0.741, 0.784, ]
 1  ->   1.0  [0.645, 0.937, 0.809, 0.783, 0.802, ]
 5  ->   5.0  [0.823, 0.808, 0.732, 0.844, 0.927, ]
 2  ->   2.0  [0.723, 0.827, 0.939, 0.807, 0.777, ]
-1  ->  -1.0  [0.651, 0.726, 0.663, 0.611, 0.638, ]
 1  ->   1.0  [0.704, 0.932, 0.838, 0.796, 0.827, ]
 1  ->   1.0  [0.823, 0.961, 0.855, 0.836, 0.848, ]
 1  ->   1.0  [0.851, 0.950, 0.816, 0.788, 0.798, ]
 1  ->   1.0  [0.850, 0.962, 0.820, 0.796, 0.811, ]
 5  ->   5.0  [0.736, 0.759, 0.762, 0.776, 0.935, ]
 1  ->   1.0  [0.828, 0.943, 0.828, 0.803, 0.809, ]
 0  ->   0.0  [0.873, 0.857, 0.732, 0.698, 0.813, ]
 5  ->   5.0  [0.783, 0.803, 0.759, 0.855, 0.902, ]
 0  ->   0.0  [0.931, 0.863, 0.714, 0.703, 0.812, ]
 2  ->   2.0  [0.735, 0.820, 0.953, 0.817, 0.770, ]
 1  ->   1.0  [0.742, 0.963, 0.843, 0.806, 0.831, ]
 5  ->   5.0  [0.823, 0.773, 0.767, 0.793, 0.914, ]
 0  ->   0.0  [0.965, 0.827, 0.710, 0.723, 0.786, ]
 2  ->   2.0  [0.758, 0.810, 0.946, 0.809, 0.775, ]
 2  ->   2.0  [0.793, 0.817, 0.942, 0.820, 0.763, ]
 2  ->   2.0  [0.798, 0.816, 0.946, 0.808, 0.765, ]
 1  ->   1.0  [0.825, 0.968, 0.826, 0.781, 0.798, ]
-1  ->  -1.0  [0.714, 0.809, 0.704, 0.692, 0.795, ]
-1  ->  -1.0  [0.734, 0.806, 0.794, 0.761, 0.815, ]
 1  ->   1.0  [0.845, 0.913, 0.791, 0.775, 0.819, ]
 1  ->   1.0  [0.863, 0.933, 0.817, 0.801, 0.833, ]
 1  ->   1.0  [0.845, 0.931, 0.803, 0.776, 0.834, ]
 1  ->   1.0  [0.824, 0.937, 0.832, 0.799, 0.845, ]
 1  ->   1.0  [0.703, 0.942, 0.807, 0.763, 0.795, ]
 1  ->   1.0  [0.716, 0.951, 0.849, 0.804, 0.839, ]
 1  ->   1.0  [0.859, 0.941, 0.807, 0.784, 0.798, ]
 1  ->   1.0  [0.839, 0.952, 0.821, 0.797, 0.802, ]
-1  ->  -1.0  [0.907, 0.791, 0.804, 0.793, 0.787, ]
-1  ->  -1.0  [0.886, 0.843, 0.795, 0.798, 0.828, ]
-1  ->  -1.0  [0.951, 0.875, 0.764, 0.795, 0.797, ]
-1  ->  -1.0  [0.886, 0.885, 0.803, 0.793, 0.838, ]
-1  ->  -1.0  [0.870, 0.840, 0.767, 0.745, 0.788, ]
-1  ->  -1.0  [0.920, 0.837, 0.780, 0.767, 0.809, ]
 1  ->   1.0  [0.844, 0.975, 0.826, 0.801, 0.787, ]
 2  ->   2.0  [0.764, 0.810, 0.941, 0.808, 0.770, ]
 2  ->   2.0  [0.765, 0.809, 0.953, 0.809, 0.774, ]
 2  ->   2.0  [0.761, 0.808, 0.942, 0.807, 0.760, ]
 2  ->   2.0  [0.764, 0.817, 0.967, 0.811, 0.772, ]
 1  ->   1.0  [0.814, 0.954, 0.784, 0.757, 0.799, ]
 2  ->   2.0  [0.774, 0.813, 0.942, 0.817, 0.774, ]
 2  ->   2.0  [0.725, 0.830, 0.957, 0.807, 0.766, ]
 2  ->   2.0  [0.737, 0.836, 0.946, 0.815, 0.779, ]
 1  ->   1.0  [0.798, 0.964, 0.836, 0.800, 0.827, ]
-1  ->  -1.0  [0.699, 0.817, 0.759, 0.678, 0.692, ]
-1  ->  -1.0  [0.673, 0.725, 0.672, 0.641, 0.650, ]
-1  ->  -1.0  [0.679, 0.782, 0.787, 0.674, 0.767, ]
 2  ->   2.0  [0.762, 0.831, 0.949, 0.812, 0.771, ]
 1  ->   1.0  [0.830, 0.947, 0.808, 0.775, 0.806, ]
 2  ->   2.0  [0.763, 0.824, 0.947, 0.811, 0.722, ]
 1  ->   1.0  [0.815, 0.965, 0.842, 0.807, 0.815, ]
 1  ->   1.0  [0.828, 0.952, 0.790, 0.758, 0.805, ]

There are still misclassifications. Looking closer, every misclassified contour is a non-digit one.
Being mistaken for "0" would be fine (it doesn't affect the point total), but the other cases are a problem.

Let's also check what these images look like.

subimgs = []
subctrs = []
for imgs in subimgs_all:
    subimgs += imgs
for ctrs in subctrs_all:
    subctrs += ctrs
del subimgs[30]
del subctrs[30]

for sims,lab,res,img,ctr in zip(svm_inputs, svm_labels, result[1], subimgs, subctrs):
    if lab != res[0]:
        print('{: }'.format(lab), ' -> ', '{: }'.format(res[0]), ' [',end='')
        for s in sims: print('{:.3f}, '.format(s), end='');
        print(']')
        img = cv2.drawContours(img, [ctr], -1, (0,255,0), 1)
        plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)),plt.xticks([]),plt.yticks([])
        plt.show()
    -1  ->   0.0  [0.908, 0.885, 0.733, 0.787, 0.790, ]

f:id:nokixa:20220319234139p:plain

    -1  ->   5.0  [0.864, 0.767, 0.785, 0.835, 0.899, ]

f:id:nokixa:20220319234141p:plain

    -1  ->   1.0  [0.913, 0.906, 0.793, 0.786, 0.792, ]

f:id:nokixa:20220319234143p:plain

    -1  ->   0.0  [0.913, 0.838, 0.728, 0.744, 0.808, ]

f:id:nokixa:20220319234146p:plain

    -1  ->   0.0  [0.914, 0.834, 0.795, 0.736, 0.793, ]

f:id:nokixa:20220319234149p:plain

    -1  ->   5.0  [0.860, 0.752, 0.796, 0.831, 0.875, ]

f:id:nokixa:20220319234152p:plain

    -1  ->   0.0  [0.957, 0.873, 0.779, 0.764, 0.802, ]

f:id:nokixa:20220319234154p:plain

    -1  ->   0.0  [0.781, 0.736, 0.658, 0.695, 0.719, ]

f:id:nokixa:20220319234156p:plain

    -1  ->   0.0  [0.928, 0.857, 0.719, 0.718, 0.788, ]

f:id:nokixa:20220319234159p:plain

    -1  ->   0.0  [0.944, 0.820, 0.724, 0.718, 0.800, ]

f:id:nokixa:20220319234202p:plain

    -1  ->   0.0  [0.913, 0.861, 0.723, 0.745, 0.792, ]

f:id:nokixa:20220319234204p:plain

    -1  ->   0.0  [0.964, 0.873, 0.784, 0.768, 0.769, ]

f:id:nokixa:20220319234207p:plain

    -1  ->   0.0  [0.947, 0.878, 0.760, 0.769, 0.822, ]

f:id:nokixa:20220319234209p:plain

    -1  ->   0.0  [0.948, 0.886, 0.725, 0.750, 0.804, ]

f:id:nokixa:20220319234211p:plain

    -1  ->   0.0  [0.919, 0.855, 0.775, 0.760, 0.799, ]

f:id:nokixa:20220319234215p:plain

    -1  ->   0.0  [0.952, 0.839, 0.746, 0.771, 0.825, ]

f:id:nokixa:20220319234217p:plain

    -1  ->   5.0  [0.868, 0.785, 0.767, 0.819, 0.886, ]

f:id:nokixa:20220319234220p:plain

    -1  ->   0.0  [0.949, 0.846, 0.745, 0.743, 0.787, ]

f:id:nokixa:20220319234223p:plain

    -1  ->   0.0  [0.951, 0.875, 0.764, 0.795, 0.797, ]

f:id:nokixa:20220319234225p:plain

The "5" cases seem unavoidable: they come from the exchange-deadline date text rather than from the point digits, but they genuinely are the character "5".

Looking again at the aspect-ratio plots, for "1" and "5" there are contours that really are "1"s and "5"s yet whose similarities are buried among the other contours.
Let's look at those images as well.

initial_similarities = []
for sims in initial_similarities_all:
    initial_similarities += sims

subimgs = []
subctrs = []
for imgs in subimgs_all:
    subimgs += copy.deepcopy(imgs)
for ctrs in subctrs_all:
    subctrs += copy.deepcopy(ctrs)

data_1 = []
data_5 = []
for sims,lab,img,ctr,convimg  in zip(initial_similarities, labs, subimgs, subctrs, converted_imgs_all):
    if lab == 1:
        data_1 += [[sims[1], img, ctr, convimg[1]]]
    elif lab == 5:
        data_5 += [[sims[4], img, ctr, convimg[4]]]

data_1 = sorted(data_1, key = lambda x:x[0])
data_5 = sorted(data_5, key = lambda x:x[0])

print('------------------------------')
for d in data_1:
    if d[0] > 0.9: break;
    else:
        img = cv2.drawContours(d[1], [d[2]], -1, (0,255,0), 1)
        print(d[0])
        plt.subplot(1,2,1), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
        plt.subplot(1,2,2), plt.imshow(d[3],cmap='gray'), plt.xticks([]), plt.yticks([])
        plt.show()
print('------------------------------')
for d in data_5:
    if d[0] > 0.9: break;
    else:
        img = cv2.drawContours(d[1], [d[2]], -1, (0,255,0), 1)
        print(d[0])
        plt.subplot(1,2,1), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
        plt.subplot(1,2,2), plt.imshow(d[3],cmap='gray'), plt.xticks([]), plt.yticks([])
        plt.show()
    ------------------------------
    0.81210434

f:id:nokixa:20220319234227p:plain

    0.8434281

f:id:nokixa:20220319234230p:plain

    0.8781869

f:id:nokixa:20220319234232p:plain

    0.88676393

f:id:nokixa:20220319234235p:plain

    0.8914103

f:id:nokixa:20220319234237p:plain

    ------------------------------
    0.80079216

f:id:nokixa:20220319234239p:plain

    0.8299814

f:id:nokixa:20220319233829p:plain

    0.84041774

f:id:nokixa:20220319233831p:plain

    0.8437113

f:id:nokixa:20220319233834p:plain

    0.8467006

f:id:nokixa:20220319233836p:plain

    0.8560233

f:id:nokixa:20220319233839p:plain

    0.8829069

f:id:nokixa:20220319233842p:plain

Maybe I should work on the correspondence for these contours...

The jaggedness bothers me a bit... would it help to straighten the template upright first?

Wrapping up for now

Things have gotten messy, so I'll break here and set up a fresh workspace next time.

OpenCVやってみる - 36. Digit classification with SVM

Continuing from last time. The state of the Jupyter notebook (data and so on) carries over as well.
Last time, I extracted the point-digit contours from the 春のパン祭り sticker-card images, compared them against the digit templates to get similarity scores, and classified them with a fixed threshold, but the results were not quite clean.

After some investigation, using the SVM (Support Vector Machine) implementation included in OpenCV ended up working well.

Considering classification methods

Last time, I compared each extracted point-digit contour against the template for each digit and picked the digit whose similarity exceeded a threshold (taking the one with the highest similarity among those).
That just uses each per-digit similarity on its own; it seems better to also consider the similarities to the other digits when deciding.

Using the vector of similarities to each digit as a feature, some machine-learning method might work.

OpenCV itself contains a machine-learning module, and the OpenCV tutorials introduce several methods, so I'd like to try one of them.

https://docs.opencv.org/4.5.2/d6/de2/tutorial_py_table_of_contents_ml.html

  • K-Nearest-Neighbor (kNN)
    Maps data with known labels into feature space; for a new sample, takes the k nearest neighbors and decides by majority vote among them.
  • Support Vector Machine (SVM)
    Chooses the optimal hyperplane (a straight line in 2D) that separates two classes in feature space.
    For multiple classes, you would normally prepare the necessary number of One-to-One or One-to-Rest SVMs.
    https://www.baeldung.com/cs/svm-multiclass-classification
    With OpenCV's implementation you apparently don't have to build that yourself; it handles multiclass classification for you.
  • K-Means
    Classifies data without training labels.
    You specify the number of clusters, and a representative point per cluster is chosen iteratively in feature space; a sample belongs to the cluster of its nearest representative.

For what I'm doing here, K-Means doesn't seem like a fit.

kNN has a low training cost, but inference tends to be heavy (it has to compute the feature-space distance to every training data point).

SVM looks lighter at inference time, so I'll use that.
The picture I have of the actual application is:

  • continuously capture image data with a smartphone camera
  • compute the point total from these images in real time
  • display the computed total, and how the points were recognized, on screen
  • incorrect results will probably appear depending on shooting conditions
  • when the user judges that the digits are recognized correctly, they press a confirm button

so being able to classify input data immediately is desirable.

Checking the data first

Before running the SVM, I want to see how the similarity data relates to the actual digits.

That said, each similarity vector is 5-dimensional (one similarity per digit template), so plotting it completely is hard. Instead I'll look at what fits in a 2D plot: the contours of two digit classes at a time, plotted by their similarities to those two digits.

Conditions may change somewhat from year to year, so I'll plot each year separately.

sims = similarities1 + similarities2
labels = labels1 + labels2
one_vs_zero_2019 = [(sims[i][1], sims[i][0], label) for i,label in enumerate(labels) if label==1 or label==0]
one_vs_two_2019 = [(sims[i][1], sims[i][2], label) for i,label in enumerate(labels) if label==1 or label==2]
one_vs_three_2019 = [(sims[i][1], sims[i][3], label) for i,label in enumerate(labels) if label==1 or label==3]
one_vs_five_2019 = [(sims[i][1], sims[i][4], label) for i,label in enumerate(labels) if label==1 or label==5]
one_vs_else_2019 = [(sims[i][1], sims[i][0], label) for i,label in enumerate(labels) if label==1 or label==-1]

sims = similarities3 + similarities4
labels = labels3 + labels4
one_vs_zero_2020 = [(sims[i][1], sims[i][0], label) for i,label in enumerate(labels) if label==1 or label==0]
one_vs_two_2020 = [(sims[i][1], sims[i][2], label) for i,label in enumerate(labels) if label==1 or label==2]
one_vs_five_2020 = [(sims[i][1], sims[i][4], label) for i,label in enumerate(labels) if label==1 or label==5]
one_vs_else_2020 = [(sims[i][1], sims[i][0], label) for i,label in enumerate(labels) if label==1 or label==-1]

sims = similarities5 + similarities6 + similarities7
labels = labels5 + labels6 + labels7
one_vs_zero_2021 = [(sims[i][1], sims[i][0], label) for i,label in enumerate(labels) if label==1 or label==0]
one_vs_two_2021 = [(sims[i][1], sims[i][2], label) for i,label in enumerate(labels) if label==1 or label==2]
one_vs_five_2021 = [(sims[i][1], sims[i][4], label) for i,label in enumerate(labels) if label==1 or label==5]
one_vs_else_2021 = [(sims[i][1], sims[i][0], label) for i,label in enumerate(labels) if label==1 or label==-1]

one_vs_zero = [one_vs_zero_2019, one_vs_zero_2020, one_vs_zero_2021]
one_vs_two = [one_vs_two_2019, one_vs_two_2020, one_vs_two_2021]
one_vs_three = [one_vs_three_2019]
one_vs_five = [one_vs_five_2019, one_vs_five_2020, one_vs_five_2021]
one_vs_else = [one_vs_else_2019, one_vs_else_2020, one_vs_else_2021]

years = ['2019', '2020', '2021']

plt.figure(figsize=(6.4,2.4), dpi=100)
plt.suptitle('One vs Zero', y=1.1)
for i,a in enumerate(one_vs_zero):
    x = [b[0] for b in a]
    y = [b[1] for b in a]
    c = [float(b[2]) for b in a]
    plt.subplot(1,3,1+i), plt.scatter(x,y,c=c), plt.title(years[i])
plt.show()

plt.figure(figsize=(6.4,2.4), dpi=100)
plt.suptitle('One vs Two', y=1.1)
for i,a in enumerate(one_vs_two):
    x = [b[0] for b in a]
    y = [b[1] for b in a]
    c = [float(b[2]) for b in a]
    plt.subplot(1,3,1+i), plt.scatter(x,y,c=c), plt.title(years[i])
plt.show()

plt.figure(figsize=(6.4,2.4), dpi=100)
plt.suptitle('One vs Three', y=1.1)
for i,a in enumerate(one_vs_three):
    x = [b[0] for b in a]
    y = [b[1] for b in a]
    c = [float(b[2]) for b in a]
    plt.subplot(1,3,1+i), plt.scatter(x,y,c=c), plt.title(years[i])
plt.show()

plt.figure(figsize=(6.4,2.4), dpi=100)
plt.suptitle('One vs Five', y=1.1)
for i,a in enumerate(one_vs_five):
    x = [b[0] for b in a]
    y = [b[1] for b in a]
    c = [float(b[2]) for b in a]
    plt.subplot(1,3,1+i), plt.scatter(x,y,c=c), plt.title(years[i])
plt.show()

plt.figure(figsize=(6.4,2.4), dpi=100)
plt.suptitle('One vs else', y=1.1)
for i,a in enumerate(one_vs_else):
    x = [b[0] for b in a]
    y = [b[1] for b in a]
    c = [float(b[2]) for b in a]
    plt.subplot(1,3,1+i), plt.scatter(x,y,c=c), plt.title(years[i])
plt.show()

f:id:nokixa:20220224034730p:plain

f:id:nokixa:20220224034732p:plain

f:id:nokixa:20220224034734p:plain

f:id:nokixa:20220224034737p:plain

f:id:nokixa:20220224034739p:plain

"1"とその他の数字を比較してみましたが、だいたい数字ごとに特徴量ベクトルが固まって分布しているよう。
ただ、どの数字でもない輪郭の分布とは重なってしまっています。

SVMで、他の数字への一致度を使ってうまく識別できればいいなと。

ちなみに"5"で1つ変な位置にある点は、輪郭検出時に"点"の文字まで含まれてしまったものかと考えられます。
これは学習データから除外しておかないと。

Trying out SVM

First, let's see what kind of results the SVM produces on a subset of the data.
I'll use the "1" and "2" data.
The feature vector is likewise limited to the similarities to "1" and "2".
The number of training samples is 10 per digit.

Copying a list with plain assignment only copies a reference, so while jumping around in the Jupyter notebook I could easily end up modifying the original data by accident. That would be inconvenient, so I used deepcopy() from the copy module.

https://murashun.jp/article/programming/python/python-list-copy-deepcopy.html

I also used sample() from the random module for the random sampling.

https://note.nkmk.me/python-random-choice-sample-choices/
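A minimal sketch of the difference, using hypothetical toy data:

import copy
import random

a = [[1, 2], [3, 4]]
b = a                    # plain assignment: b is just another reference to the same list
c = copy.deepcopy(a)     # fully independent copy
b[0][0] = 99
print(a[0][0])                      # 99 -> the original changed through b
print(c[0][0])                      # 1  -> the deep copy is unaffected
print(random.sample(range(10), 3))  # 3 unique random picks, e.g. [7, 0, 4]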

import copy
import random

all_vectors = copy.deepcopy(similarities1 + similarities2 + similarities3
                             + similarities4 + similarities5 + similarities6 + similarities7)
all_labels = copy.deepcopy(labels1 + labels2 + labels3 + labels4 + labels5 + labels6 + labels7)

# Remove inadequate contour data in img1
del all_vectors[30]
del all_labels[30]

numbers = [0, 1, 2, 3, 5]
labels = [-1] + numbers
selected_labels = [1, 2]

# Select feature vector elements to use
all_vectors = [[d for i,d in enumerate(vec) if numbers[i] in selected_labels] for vec in all_vectors]

n_train_data = 10
train_data = []
train_labels = []
n_test_data = 10
test_data = []
test_labels = []

for lab in selected_labels:
    samples = [vec for i,vec in enumerate(all_vectors) if all_labels[i]==lab]
    n = min(n_train_data, len(samples))
    train_data += random.sample(samples, n)
    train_labels += [lab] * n
    n = min(n_test_data, len(samples))
    test_data += random.sample(samples, n)
    test_labels += [lab] * n

[print(np.array(train_data[i]), ', ', train_labels[i]) for i in range(len(train_data))]
svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setType(cv2.ml.SVM_C_SVC)
svm.setC(1)
svm.setGamma(1)
svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels));

result = svm.predict(np.array(test_data, 'float32'))
print('SVM predict result: ')
print(result)
print('Comparison: ')
for i in range(len(test_labels)):
    print(result[1][i], ' - ', test_labels[i])
[0.8993755  0.82889575] ,  1
[0.9571013 0.8108691] ,  1
[0.93051445 0.79986185] ,  1
[0.9202969 0.8210506] ,  1
[0.94315743 0.8116947 ] ,  1
[0.9537984 0.8174602] ,  1
[0.9233544 0.7887045] ,  1
[0.93338346 0.8063096 ] ,  1
[0.90709674 0.8227937 ] ,  1
[0.94739175 0.79242164] ,  1
[0.8452033 0.8994955] ,  2
[0.82478577 0.939384  ] ,  2
[0.8292844 0.9430692] ,  2
[0.83571035 0.9575302 ] ,  2
[0.8436054 0.9455434] ,  2
[0.8402701  0.93678236] ,  2
[0.83998835 0.8954112 ] ,  2
[0.8377397  0.93956023] ,  2
[0.83261627 0.9558522 ] ,  2
[0.83023584 0.93284434] ,  2
SVM predict result: 
(0.0, array([[1.],
       [1.],
       [1.],
       [1.],
       [1.],
       [1.],
       [1.],
       [1.],
       [1.],
       [1.],
       [2.],
       [2.],
       [2.],
       [2.],
       [2.],
       [2.],
       [2.],
       [2.],
       [2.],
       [2.]], dtype=float32))
Comparison: 
[1.]  -  1
[1.]  -  1
[1.]  -  1
[1.]  -  1
[1.]  -  1
[1.]  -  1
[1.]  -  1
[1.]  -  1
[1.]  -  1
[1.]  -  1
[2.]  -  2
[2.]  -  2
[2.]  -  2
[2.]  -  2
[2.]  -  2
[2.]  -  2
[2.]  -  2
[2.]  -  2
[2.]  -  2
[2.]  -  2

For now, the classification is correct.
predict() returns a value "0" whose meaning I'm unsure of, followed by the predicted label for each input sample.
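As a sketch, the returned tuple can be unpacked like this (the second element is what actually gets used below):

retval, predicted = svm.predict(np.array(test_data, 'float32'))
print(predicted.ravel())   # one predicted label per input row, e.g. [1. 1. ... 2.]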

Digging a little deeper into the SVM

The following functions let us inspect what kind of SVM classifier was obtained.

  • getSupportVectors()
  • getDecisionFunction()
  • getUncompressedSupportVectors()

The official documentation does describe them, but not clearly enough for me.
I'll just call them and check what they return.

SVM Class Reference

print('getSupportVectors: ')
print(svm.getSupportVectors())
print('getDecisionFunction: ')
print(svm.getDecisionFunction(0))
print('getUncompressedSupportVectors: ')
print(svm.getUncompressedSupportVectors())
getSupportVectors: 
[[ 0.95603085 -1.245411  ]]
getDecisionFunction: 
(-0.23778849840164185, array([[1.]]), array([[0]], dtype=int32))
getUncompressedSupportVectors: 
[[0.8993755  0.82889575]
 [0.9571013  0.8108691 ]
 [0.93051445 0.79986185]
 [0.9202969  0.8210506 ]
 [0.94315743 0.8116947 ]
 [0.9537984  0.8174602 ]
 [0.9233544  0.7887045 ]
 [0.93338346 0.8063096 ]
 [0.90709674 0.8227937 ]
 [0.94739175 0.79242164]
 [0.8452033  0.8994955 ]
 [0.82478577 0.939384  ]
 [0.8292844  0.9430692 ]
 [0.83571035 0.9575302 ]
 [0.8436054  0.9455434 ]
 [0.8402701  0.93678236]
 [0.83998835 0.8954112 ]
 [0.8377397  0.93956023]
 [0.83261627 0.9558522 ]
 [0.83023584 0.93284434]]

getDecisionFunction() needs the index of a decision function (if that's the right term) as an argument. With one-vs-one multiclass SVM there are N(N-1)/2 decision functions for N classes, so this 2-class case has exactly one, and I pass 0.

  • getUncompressedSupportVectors() is documented as returning the support vectors from which the compressed support vectors used in actual inference are derived.
    Looking at the results above, it seems to simply return the training data.
    The SVM page of the OpenCV tutorials says that determining the decision boundary doesn't require all the training data, only the data near the boundary.
    With more training data, would it narrow down to just the necessary points? Or did everything come out because the training samples are so tightly clustered?

  • getSupportVectors() is documented as returning the support vectors, but SVM explanations describe support vectors as data points near the boundary. Judging from the result, this looks like the weight vector defining the decision boundary rather than a data point. getDecisionFunction() returns a tuple of the form (retval, alpha, svidx), where retval appears to be the bias term of the decision function.

Let's double-check the decision function.

w = svm.getSupportVectors()[0]
ret,alpha,svidx = svm.getDecisionFunction(0)
b = ret
for i,d in enumerate(test_data):
    val = w @ d - b
    predicted = svm.predict(np.reshape(np.array(d, 'float32'), (1,-1)))
    print('label: ', test_labels[i]
          , ', Function output: ', val
          , ', Predicted: ', predicted[1][0])
label:  1 , Function output:  0.09859859943389893 , Predicted:  [1.]
label:  1 , Function output:  0.11231565475463867 , Predicted:  [1.]
label:  1 , Function output:  0.08623391389846802 , Predicted:  [1.]
label:  1 , Function output:  0.16496700048446655 , Predicted:  [1.]
label:  1 , Function output:  0.13821208477020264 , Predicted:  [1.]
label:  1 , Function output:  0.09507524967193604 , Predicted:  [1.]
label:  1 , Function output:  0.13989132642745972 , Predicted:  [1.]
label:  1 , Function output:  0.16536468267440796 , Predicted:  [1.]
label:  1 , Function output:  0.1658780574798584 , Predicted:  [1.]
label:  1 , Function output:  0.1524949073791504 , Predicted:  [1.]
label:  2 , Function output:  -0.0744127631187439 , Predicted:  [2.]
label:  2 , Function output:  -0.14605987071990967 , Predicted:  [2.]
label:  2 , Function output:  -0.15364772081375122 , Predicted:  [2.]
label:  2 , Function output:  -0.2124718427658081 , Predicted:  [2.]
label:  2 , Function output:  -0.1557653546333313 , Predicted:  [2.]
label:  2 , Function output:  -0.12564247846603394 , Predicted:  [2.]
label:  2 , Function output:  -0.15429812669754028 , Predicted:  [2.]
label:  2 , Function output:  -0.13328897953033447 , Predicted:  [2.]
label:  2 , Function output:  -0.15880388021469116 , Predicted:  [2.]
label:  2 , Function output:  -0.12945902347564697 , Predicted:  [2.]

So getSupportVectors() does give the weight vector, and getDecisionFunction() the bias term.
Multiplying the input vector by the weight vector and subtracting the bias yields a decision value, and the class appears to be decided by its sign.
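
In equation form (a sketch, assuming the sign convention observed above):

$$ f(x) = w \cdot x - b, \qquad \hat{y} = 1 \ \text{if}\ f(x) > 0, \quad \hat{y} = 2 \ \text{if}\ f(x) < 0 $$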

Let's also try 3-class classification, using the digits "1", "2", and "5".
This time the feature vector uses all five digit similarities.

all_vectors = copy.deepcopy(similarities1 + similarities2 + similarities3
                             + similarities4 + similarities5 + similarities6 + similarities7)
all_labels = copy.deepcopy(labels1 + labels2 + labels3 + labels4 + labels5 + labels6 + labels7)

# Remove inadequate contour data in img1
del all_vectors[30]
del all_labels[30]

def get_random_sample(data_in, labels_in, selected_labels, n_samples):
    data_rtn = []
    labels_rtn = []
    for lab in selected_labels:
        samples = [d for i,d in enumerate(data_in) if labels_in[i]==lab]
        n = min(n_samples, len(samples))
        data_rtn += random.sample(samples, n)
        labels_rtn += [lab] * n
    return data_rtn, labels_rtn

train_data, train_labels = get_random_sample(all_vectors, all_labels, [1,2,5], 10)
test_data, test_labels = get_random_sample(all_vectors, all_labels, [1,2,5], 10)

[print(np.array(train_data[i]), ', ', train_labels[i]) for i in range(len(train_data))]
svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setType(cv2.ml.SVM_C_SVC)
svm.setC(1)
svm.setGamma(1)
svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels));

result = svm.predict(np.array(test_data, 'float32'))
print('Comparison: ')

# Dictionary containing number of correct answers and number of same labels
svm_results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i,lab in enumerate(test_labels):
    if result[1][i] == lab:
        svm_results[lab][0] += 1
    svm_results[lab][1] += 1
for k,v in svm_results.items():
    print(k, ': ', v[0], ' / ', v[1])
[0.8684619  0.93794453 0.82305175 0.80380815 0.7853995 ] ,  1
[0.863555   0.9393367  0.8205488  0.7917124  0.78287023] ,  1
[0.8419271  0.9571013  0.8108691  0.77983546 0.7980355 ] ,  1
[0.83179814 0.9788645  0.7952227  0.7923971  0.7836161 ] ,  1
[0.870909  0.9359166 0.8235449 0.8104057 0.7868414] ,  1
[0.8693162  0.9437623  0.82522047 0.8118913  0.77726704] ,  1
[0.871623   0.9311241  0.81550264 0.81565213 0.79558474] ,  1
[0.8750252  0.91742384 0.78483945 0.80920565 0.8159622 ] ,  1
[0.8761335  0.9543413  0.8270361  0.80594933 0.78908885] ,  1
[0.8319478 0.894798  0.7914398 0.7971767 0.7837865] ,  1
[0.7703252  0.8415919  0.9335647  0.81709546 0.7735364 ] ,  2
[0.7810648  0.83210677 0.9345461  0.8076761  0.80540264] ,  2
[0.7528915  0.83172077 1.         0.80888104 0.80121213] ,  2
[0.779265   0.83273137 0.92147803 0.81171924 0.77139753] ,  2
[0.73760915 0.83789265 0.93405205 0.7996441  0.79096395] ,  2
[0.7610472  0.8352113  0.93603534 0.7986501  0.78484255] ,  2
[0.77799076 0.8399149  0.9400889  0.80451334 0.80509204] ,  2
[0.7862375  0.82478577 0.939384   0.8073074  0.7773469 ] ,  2
[0.7736371  0.83998835 0.8954112  0.8008472  0.76628774] ,  2
[0.7727292  0.8402701  0.93678236 0.8075204  0.78538513] ,  2
[0.8218305  0.8154667  0.7384431  0.8331704  0.92029697] ,  5
[0.8530085  0.7790276  0.72586    0.7505195  0.92765796] ,  5
[0.84800655 0.7884924  0.74963397 0.8336336  0.9487194 ] ,  5
[0.8432178  0.80467755 0.763133   0.8533464  0.93024945] ,  5
[0.855599   0.8009213  0.7863985  0.83297044 0.91814077] ,  5
[0.83844346 0.78525823 0.7594693  0.83071625 0.9581587 ] ,  5
[0.85112804 0.82673436 0.7607567  0.83964443 0.93797547] ,  5
[0.825503   0.7873881  0.73949605 0.83388895 0.9183223 ] ,  5
[0.8455853  0.7865896  0.7510412  0.77967346 0.938463  ] ,  5
[0.8390334  0.78312147 0.7487837  0.8266298  0.9461081 ] ,  5
Comparison: 
-1 :  0  /  0
0 :  0  /  0
1 :  10  /  10
2 :  10  /  10
3 :  0  /  0
5 :  10  /  10
print('getSupportVectors: ')
print(svm.getSupportVectors())
print('getDecisionFunction: ')
[print(svm.getDecisionFunction(i)) for i in range(svm.getSupportVectors().shape[0])]
print('getUncompressedSupportVectors: ')
print(svm.getUncompressedSupportVectors())
getSupportVectors: 
[[ 0.74236786  1.1534866  -1.2011049  -0.05879921  0.15569824]
 [ 0.13425273  1.5126095   0.5482252  -0.10417938 -1.4019974 ]
 [-0.60811514  0.35912287  1.7493302  -0.04538018 -1.5576956 ]]
getDecisionFunction: 
(0.6581481695175171, array([[1.]]), array([[0]], dtype=int32))
(0.5590379238128662, array([[1.]]), array([[1]], dtype=int32))
(-0.08549034595489502, array([[1.]]), array([[2]], dtype=int32))
getUncompressedSupportVectors: 
[[0.8358994  0.97376585 0.80774623 0.7944305  0.79416436]
 [0.8387221  0.94739175 0.79242164 0.7958192  0.79904383]
 [0.8761335  0.9543413  0.8270361  0.80594933 0.78908885]
 [0.8625072  0.9391679  0.8229808  0.81084424 0.80457014]
 [0.8419271  0.9571013  0.8108691  0.77983546 0.7980355 ]
 [0.8842514  0.93677527 0.82215434 0.8002985  0.8105224 ]
 [0.82942396 0.9743748  0.7959507  0.798544   0.77946776]
 [0.84554917 0.9603941  0.80359674 0.800022   0.81536746]
 [0.8716829  0.9202969  0.8210506  0.8102947  0.7935237 ]
 [0.88022274 0.92476493 0.8195491  0.8112865  0.7874757 ]
 [0.77455294 0.8228092  0.93951803 0.8037939  0.79569256]
 [0.7871597  0.8369811  0.91662616 0.81502336 0.77930087]
 [0.7602673  0.83667743 0.92887866 0.80211806 0.7976724 ]
 [0.7736371  0.83998835 0.8954112  0.8008472  0.76628774]
 [0.76749974 0.83696306 0.9345461  0.8042174  0.80406123]
 [0.79241073 0.8335312  0.9546793  0.80434114 0.8065423 ]
 [0.78609586 0.8381242  0.93519616 0.80665094 0.7609726 ]
 [0.7798084  0.83231604 0.9123746  0.8087669  0.7738839 ]
 [0.7915582  0.82690877 0.9614248  0.81141484 0.76603115]
 [0.81096166 0.83058804 0.9458052  0.80894995 0.76511675]
 [0.825503   0.7873881  0.73949605 0.83388895 0.9183223 ]
 [0.8522377  0.75933874 0.72682977 0.7764713  0.894319  ]
 [0.85112804 0.82673436 0.7607567  0.83964443 0.93797547]
 [0.8366933  0.76728594 0.7461595  0.79035497 0.94660616]
 [0.83844346 0.78525823 0.7594693  0.83071625 0.9581587 ]
 [0.8390334  0.78312147 0.7487837  0.8266298  0.9461081 ]
 [0.84800655 0.7884924  0.74963397 0.8336336  0.9487194 ]
 [0.84835213 0.8208981  0.78807247 0.7563367  0.9346628 ]
 [0.84590787 0.83696854 0.76865834 0.85480416 0.9347266 ]
 [0.84676135 0.8202787  0.78727037 0.76902366 0.95365864]]
w = svm.getSupportVectors()
dfs = [svm.getDecisionFunction(i) for i in range(3)]
b = np.array([df[0] for df in  dfs])
for i,d in enumerate(test_data):
    val = w @ d - b
    predicted = svm.predict(np.reshape(np.array(d, 'float32'), (1,-1)))
    print('label: ', test_labels[i]
          , ', Function outputs: ', val
          , ', Predicted: ', predicted[1][0])
label:  1 , Function outputs:  [0.18518424 0.25113189 0.05232799] , Predicted:  [1.]
label:  1 , Function outputs:  [0.15800738 0.19747007 0.02584302] , Predicted:  [1.]
label:  1 , Function outputs:  [ 0.19115281  0.18169904 -0.02307355] , Predicted:  [1.]
label:  1 , Function outputs:  [0.2127431  0.26644951 0.04008663] , Predicted:  [1.]
label:  1 , Function outputs:  [0.12235677 0.22019732 0.08422077] , Predicted:  [1.]
label:  1 , Function outputs:  [0.18681097 0.24292523 0.04249454] , Predicted:  [1.]
label:  1 , Function outputs:  [0.15254593 0.21868181 0.05251586] , Predicted:  [1.]
label:  1 , Function outputs:  [0.13946629 0.23918605 0.0861001 ] , Predicted:  [1.]
label:  1 , Function outputs:  [0.20872855 0.28805488 0.06570649] , Predicted:  [1.]
label:  1 , Function outputs:  [0.15355039 0.24784851 0.08067822] , Predicted:  [1.]
label:  2 , Function outputs:  [-0.17680824  0.11696589  0.28015423] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.16753972  0.11126626  0.26518631] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.16002786  0.1760323   0.32244027] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.18941259  0.18797886  0.36377156] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.1927557   0.11770147  0.29683733] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.16306555  0.10350084  0.25294673] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.19990551  0.16659194  0.35287762] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.17447698  0.11759734  0.27845466] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.16249359  0.16773802  0.31661165] , Predicted:  [2.]
label:  2 , Function outputs:  [-0.11817181  0.14852113  0.25307298] , Predicted:  [2.]
label:  5 , Function outputs:  [ 0.07100189 -0.23229852 -0.31692028] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.05144411 -0.23986143 -0.30492544] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.09993356 -0.1874423  -0.30099571] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.07407373 -0.20341051 -0.29110408] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.10216659 -0.2469826  -0.36276901] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.05273694 -0.28652012 -0.35287714] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.06737787 -0.26389527 -0.34489298] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.10729289 -0.15760526 -0.27851796] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.06865722 -0.22615045 -0.30842745] , Predicted:  [5.]
label:  5 , Function outputs:  [ 0.06620628 -0.18782762 -0.26765358] , Predicted:  [5.]

Since this is 3-class classification, there are three weight vectors and three bias terms.
Looking at how the weight vectors are distributed, you can more or less tell which pair each one separates:

  • row 1: "1" vs "2" (positive value -> "1", negative -> "2")
  • row 2: "1" vs "5" (positive value -> "1", negative -> "5")
  • row 3: "2" vs "5" (positive value -> "2", negative -> "5")
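
As a sketch, assuming OpenCV's multiclass C-SVC combines these one-vs-one decision functions by majority vote (this re-implementation is hypothetical, not the actual OpenCV internals), the 3-class prediction can be reproduced from the weight vectors and bias terms above:

# Hypothetical one-vs-one vote using w = svm.getSupportVectors() and b from getDecisionFunction()
classes = [1, 2, 5]
pairs = [(0, 1), (0, 2), (1, 2)]   # decision-function rows -> class-index pairs, in the order above

def predict_ovo(w, b, x):
    votes = {c: 0 for c in classes}
    for (i, j), wk, bk in zip(pairs, w, b):
        # A positive decision value votes for the first class of the pair, negative for the second
        winner = classes[i] if wk @ x - bk > 0 else classes[j]
        votes[winner] += 1
    return max(votes, key=votes.get)

# e.g. predict_ovo(w, b, test_data[0]) should agree with svm.predict() on these samples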

SVM on the full dataset

This time I'll train and run inference with the SVM on the whole dataset.
As above, the number of training samples is 10 per digit.

all_vectors = copy.deepcopy(similarities1 + similarities2 + similarities3
                             + similarities4 + similarities5 + similarities6 + similarities7)
all_labels = copy.deepcopy(labels1 + labels2 + labels3 + labels4 + labels5 + labels6 + labels7)

# Remove inadequate contour data in img1
del all_vectors[30]
del all_labels[30]

train_data, train_labels = get_random_sample(all_vectors, all_labels, [-1,0,1,2,3,5], 10)

svm = cv2.ml.SVM_create()
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.setType(cv2.ml.SVM_C_SVC)
svm.setC(1)
svm.setGamma(1)
svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels));

result = svm.predict(np.array(all_vectors, 'float32'))

# Dictionary containing number of correct answers and number of same labels
svm_results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i,lab in enumerate(all_labels):
    if result[1][i] == lab:
        svm_results[lab][0] += 1
    svm_results[lab][1] += 1
for k,v in svm_results.items():
    print(k, ': ', v[0], ' / ', v[1])
-1 :  60  /  89
0 :  27  /  27
1 :  78  /  78
2 :  39  /  39
3 :  0  /  2
5 :  29  /  29

It infers mostly correctly.
However, there remain two problems:

  • some contours that are not digits are recognized as one of the digits
  • "3" is not inferred correctly

For the first, I'd like to narrow down the candidate contours at an earlier stage.

For the second, after some trials, changing the SVM's "C" value fixed it.
When the SVM determines the decision boundary, its cost function weighs the size of the margin against the number of misclassifications; "C" is the weight on the misclassification term and sets the balance between the two.
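
For reference, the standard soft-margin objective this parameter appears in is:

$$ \min_{w,b,\xi}\ \tfrac{1}{2}\lVert w \rVert^2 + C \sum_i \xi_i \quad \text{s.t.} \quad y_i (w \cdot x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0 $$

A larger C penalizes the slack (misclassification) terms more heavily, at the cost of a smaller margin.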

Understanding SVM

今回は"3"のサンプル数が少なく、軽視されてしまったものと考えられます。
なので、"C"を思い切り大きくしてみます。

Cs = [1, 5, 10, 50, 100, 200]

for C in Cs:
    svm.setC(C)
    svm.train(np.array(train_data, 'float32'), cv2.ml.ROW_SAMPLE, np.array(train_labels));
    result = svm.predict(np.array(all_vectors, 'float32'))

    # Dictionary containing number of correct answers and number of same labels
    svm_results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
    for i,lab in enumerate(all_labels):
        if result[1][i] == lab:
            svm_results[lab][0] += 1
        svm_results[lab][1] += 1
    print('C: ', C)
    for k,v in svm_results.items():
        print(k, ': ', v[0], ' / ', v[1])
    print('')
C:  1
-1 :  60  /  89
0 :  27  /  27
1 :  78  /  78
2 :  39  /  39
3 :  0  /  2
5 :  29  /  29

C:  5
-1 :  60  /  89
0 :  27  /  27
1 :  78  /  78
2 :  39  /  39
3 :  0  /  2
5 :  29  /  29

C:  10
-1 :  58  /  89
0 :  27  /  27
1 :  78  /  78
2 :  39  /  39
3 :  0  /  2
5 :  29  /  29

C:  50
-1 :  75  /  89
0 :  27  /  27
1 :  78  /  78
2 :  39  /  39
3 :  2  /  2
5 :  29  /  29

C:  100
-1 :  76  /  89
0 :  27  /  27
1 :  78  /  78
2 :  39  /  39
3 :  2  /  2
5 :  29  /  29

C:  200
-1 :  77  /  89
0 :  27  /  27
1 :  78  /  78
2 :  39  /  39
3 :  2  /  2
5 :  29  /  29

"C"が50以上で正しく"3"を判定できています。また、数字でない輪郭の判別精度も少し上がっているようです。
今回は"C"の値としては100を選んでおきたいと思います。

Wrapping up

Digit classification with the SVM now works.
I'll stop here for this post.

Next time I'll look into the remaining items:

  • screening on the initial transformation matrix, before ICP
  • ICP convergence criteria

OpenCVやってみる - 35. Running point classification on multiple images

春のパン祭り has started, but automatic point counting still isn't working...
This is hard...

Changing the approach

Trying the point-classification method developed so far on real data didn't go well.
The "5" character was especially difficult to classify.

I'll skip the process and just list what I changed.

  • "0"以外での比較方法の修正
    "0"以外の文字での比較ですが、前回まではテンプレート→対象輪郭の変換行列を求めて、変換後テンプレートと対象輪郭の塗りつぶし画像を比較していました。
    ただ、変換行列を求めた後に逆変換行列を求め、テンプレートの塗りつぶし画像(1回生成すればいい)と逆変換をかけた対象輪郭の塗りつぶしを比較する、というほうが処理が軽くなるかと考えました。
  • 輪郭検出時、cv2.findContours()cv2.CHAIN_APPROX_NONEによる検出を実施
    今まではcv2.CHAIN_APPROX_SIMPLEを使っていましたが、これだと輪郭上の一部の点しか取得できません。
    下記の理由で、輪郭のカーブ上の点も得たかったので、変更しました。
  • テンプレート輪郭上の点は、カーブ上のものも選ぶ
    判定に失敗したデータを見ると、変換行列推定のICPアルゴリズムの結果、テンプレートの点と比較対象画像の点の対応が期待通りになっていませんでした。
    前回までで選んだテンプレート上の点は、文字の角の点にしていましたが、配置に偏りがあったのが問題かと。
    なので、テンプレートの輪郭上の点を全体的に取るように変更しました。
  • 初期変換行列の評価方法
    最近傍点との距離の総和を使っていましたが、輪郭点が増えるので処理が重くなると考え、テンプレートマッチングによる評価に変更しました。
  • 初期変換行列の時点でのふるい分け
    ICPで使うテンプレート輪郭点が増えたので、処理が重くなっています。
    初期変換行列の時点である程度判断してよさそうかと思われるので、これにより処理を軽減します。 閾値はデータを見て決める必要があるかと思われます。
    • 初期変換行列で十分な一致度が得られればそれで終了
    • 一致度が十分な値に達しなければ、それでも終了

Preparation

Import the libraries and load the images as before.

import cv2
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import math

img1 = cv2.imread('harupan_190428_1.jpg')
img2 = cv2.imread('harupan_190428_2.jpg')
img3 = cv2.imread('harupan_200317_1.jpg')
img4 = cv2.imread('harupan_210227_2.jpg')
img5 = cv2.imread('harupan_210402_1.jpg')
img6 = cv2.imread('harupan_210402_2.jpg')
img7 = cv2.imread('harupan_210414_1.jpg')
plt.figure(figsize=(12.8,9.6), dpi=100)
plt.subplot(2,4,1), plt.imshow(cv2.cvtColor(img1,cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.subplot(2,4,2), plt.imshow(cv2.cvtColor(img2,cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.subplot(2,4,3), plt.imshow(cv2.cvtColor(img3,cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.subplot(2,4,4), plt.imshow(cv2.cvtColor(img4,cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.subplot(2,4,5), plt.imshow(cv2.cvtColor(img5,cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.subplot(2,4,6), plt.imshow(cv2.cvtColor(img6,cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.subplot(2,4,7), plt.imshow(cv2.cvtColor(img7,cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220223004508p:plain

Contour detection

Here is the revised contour-detection code.
There are also some minor changes:

  • the function name
  • the resized image is always returned (the debug-mode argument was dropped)
  • the resolution threshold (for deciding whether to resize) became an optional argument

To summarize the processing once more:

  • keep the resolution from getting too large (if the image's height or width exceeds the threshold, shrink to it while preserving aspect ratio)
  • convert to HSV format
  • binarize on Hue and Saturation (rotate the hue values so the red region can be thresholded as one range)
  • run contour detection, retrieving the full hierarchy
  • extract the second-level contours
  • filter out contours at or below a given area and return the rest
  • also return the resized image data
def detect_candidate_contours(image, res_th=800):
    h, w, chs = image.shape
    if h > res_th or w > res_th:
        k = float(res_th)/h if w > h else float(res_th)/w
    else:
        k = 1.0
    img = cv2.resize(image, None, fx=k, fy=k, interpolation=cv2.INTER_AREA)
    if __debug__:
        print('Resized to ', img.shape)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Convert hue value (rotation, mask by saturation)
    hsv[:,:,0] = np.where(hsv[:,:,0] < 50, hsv[:,:,0]+180, hsv[:,:,0])
    hsv[:,:,0] = np.where(hsv[:,:,1] < 100, 0, hsv[:,:,0])
    # Thresholding with cv2.inRange()
    th_hue = cv2.inRange(hsv[:,:,0], 135, 190)
    # Retrieve all points on the contours (cv2.CHAIN_APPROX_NONE)
    contours, hierarchy = cv2.findContours(th_hue, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)
    indices0 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] == -1]
    indices1 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] in indices0]
    if __debug__:
        print('Number of contours: ', len(contours))
        print('Number of indices0: ', len(indices0), 'indices1: ', len(indices1))
    contours1 = [contours[i] for i in indices1]
    contours1_filtered = [ctr for ctr in contours1 if cv2.contourArea(ctr) > float(res_th)*float(res_th)/4000]
    return contours1_filtered, img

Digit classification

This is a combination of several steps.

Helper functions

Create a small image of the area around a contour.
Contour data with its origin shifted to that sub-image is returned as well.

def create_contour_area_image(img, ctrs, idx):
    x,y,w,h = cv2.boundingRect(ctrs[idx])
    rtn_img = img[y:y+h,x:x+w,:].copy()
    rtn_ctr = ctrs[idx].copy()
    origin = np.array([x,y])
    for c in rtn_ctr:
        c[0,:] -= origin
    return rtn_img, rtn_ctr

Create a filled (solid) image of the contour.

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# img_shape: Optional, tuple of (image_height, image_width), if omitted, calculated from ctr
def create_solid_contour(ctr, img_shape=(int(0),int(0))):
    if img_shape == (int(0),int(0)):
        _,_,w,h = cv2.boundingRect(ctr)
    else:
        h,w = img_shape
    img = np.zeros((h,w), 'uint8')
    img = cv2.drawContours(img, [ctr], -1, 255, -1)
    return img

Transformation matrix estimation

This finds the optimal transformation between two contours. It includes:

  • nearest-neighbor point search
  • a transformation matrix based on the (rotated) bounding rectangles
  • 2D affine matrix estimation (3+ point correspondences, least squares)
  • the ICP algorithm
# pts: list of 2D points, or ndarray of shape (n,2)
# query: 2D point to find nearest neighbor
def find_nearest_neighbor(pts, query):
    min_distance = float('inf')
    min_idx = 0
    for i, p in enumerate(pts):
        d = np.linalg.norm(query - p)
        if(d < min_distance):
            min_distance = d
            min_idx = i
    return min_idx, min_distance
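
# Sketch: since the linear search above runs once per source point per ICP iteration,
# a vectorized variant may be worthwhile; same interface, assuming pts can be
# stacked into a single ndarray
def find_nearest_neighbor_np(pts, query):
    d = np.linalg.norm(np.asarray(pts) - query, axis=1)  # all distances at once
    i = int(np.argmin(d))
    return i, float(d[i])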

def get_initial_transform(src_ctr, src_img, dst_ctr, dst_img):
    src_box = cv2.boxPoints(cv2.minAreaRect(src_ctr))
    dst_box = cv2.boxPoints(cv2.minAreaRect(dst_ctr))
    # Rotated patterns are created when starting index is slided
    dst_box = np.vstack([dst_box, dst_box])
    
    src_pts = [p for p in src_ctr[:,0,:]]
    dst_pts = [p for p in dst_ctr[:,0,:]]
    max_similarity = 0.0
    for i in range(4):
        M = cv2.getAffineTransform(src_box[0:3], dst_box[i:i+3])
        converted_img = cv2.warpAffine(src_img, M, dsize=(dst_img.shape[1], dst_img.shape[0]), flags=cv2.INTER_NEAREST)
        similarity = cv2.matchTemplate(converted_img, dst_img, cv2.TM_CCORR_NORMED)
        if similarity[0,0] > max_similarity:
            M_rtn = M
            max_similarity = similarity[0,0]
    return M_rtn, max_similarity

# src, dst: ndarray, shape is (n,2) (n: number of points)
def estimate_affine_2d(src, dst):
    n = min(src.shape[0], dst.shape[0])
    x = dst[0:n].flatten()
    A = np.zeros((2*n,6))
    for i in range(n):
        A[i*2,0] = src[i,0]
        A[i*2,1] = src[i,1]
        A[i*2,2] = 1
        A[i*2+1,3] = src[i,0]
        A[i*2+1,4] = src[i,1]
        A[i*2+1,5] = 1
    M = np.linalg.inv(A.T @ A) @ A.T @ x
    return M.reshape([2,3])
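
# Sketch: an equivalent least-squares solve with np.linalg.lstsq, which avoids
# forming A.T @ A explicitly and is numerically safer (same result as above)
def estimate_affine_2d_lstsq(src, dst):
    n = min(src.shape[0], dst.shape[0])
    x = dst[0:n].flatten()
    A = np.zeros((2*n,6))
    for i in range(n):
        A[i*2, 0:3] = [src[i,0], src[i,1], 1]
        A[i*2+1, 3:6] = [src[i,0], src[i,1], 1]
    M, _, _, _ = np.linalg.lstsq(A, x, rcond=None)
    return M.reshape([2,3])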

# Find optimum affine matrix using ICP algorithm
# src_pts: ndarray, shape is (n_s,2) (n_s: number of points)
# dst_pts: ndarray, shape is (n_d,2) (n_d: number of points, n_d should be larger or equal to n_s)
# initial_matrix: ndarray, shape is (2,3)
def icp(src_pts, dst_pts, max_iter=100, initial_matrix=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])):
    default_affine_matrix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    if dst_pts.shape[0] < src_pts.shape[0]:
        print("icp: Insufficient destination points")
        return default_affine_matrix, False
    if initial_matrix.shape != (2,3):
        print("icp: Illegal shape of initial_matrix")
        return default_affine_matrix, False
    M = initial_matrix
    # Store indices of the nearest neighbor point of dst_pts to the converted point of src_pts
    nn_idx = []
    for i in range(max_iter):
        nn_idx_tmp = []
        dst_pts_list = [p for p in dst_pts]
        idx_list = list(range(0,dst_pts.shape[0]))
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts_list, p2)
            nn_idx_tmp += [idx_list[idx]]
            del dst_pts_list[idx]
            del idx_list[idx]
        if __debug__:
            print("icp: nn_idx: ", nn_idx_tmp)
        if nn_idx != [] and nn_idx == nn_idx_tmp:
            if __debug__:
                print("icp: converged in ", i, " iteration(s)")
            break
        dst_pts2 = np.zeros_like(src_pts)
        for j,idx in enumerate(nn_idx_tmp):
            dst_pts2[j,:] = dst_pts[idx,:]
        M = estimate_affine_2d(src_pts, dst_pts2)
        nn_idx = nn_idx_tmp
        if i == max_iter -1:
            print("icp: Not converged")
            return M, False
    return M, True
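
# Sketch: a tiny sanity check for icp(), defined but not called here.
# The shift is kept small relative to the point spacing so that the greedy
# nearest-neighbor pairing is unambiguous and the known translation is recovered.
def _icp_sanity_check():
    src = np.array([[0,0], [10,0], [10,10], [0,10], [5,0], [10,5], [5,10], [0,5]])
    dst = src + np.array([1, -1])   # destination = source shifted by (1, -1)
    M, ok = icp(src, dst)           # expect ok == True (converged)
    print(np.round(M, 2))           # expect approximately [[1, 0, 1], [0, 1, -1]]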

# src_selected_pt_idx: Indices of points in src_ctr to find matching points
# sim_th: Threshold of similarity
def get_optimum_transform(src_ctr, src_selected_pt_idx, dst_ctr, sim_th):
    src_img = create_solid_contour(src_ctr)
    dst_img = create_solid_contour(dst_ctr)
    src_pts = np.array([src_ctr[idx,0,:] for idx in src_selected_pt_idx])
    dst_pts = np.array([p for p in dst_ctr[:,0,:]])
    M_init, sim_init = get_initial_transform(src_ctr, src_img, dst_ctr, dst_img)
    if sim_init > sim_th:
        print('get_optimum_transform: ICP skipped')
        return M_init, True
    else:
        return icp(src_pts, dst_pts, initial_matrix=M_init)

一致度計算 ("0"以外の判定用)

ここで、上で検討した比較方法の変更を入れます。
cv2.invertAffineTransform()逆行列を計算してくれます。

OpenCV: Geometric Image Transformations

輪郭点の座標にアフィン変換を実施、その後に変換した輪郭点で塗りつぶし画像を作る、という形で実装しました。
先に輪郭の塗りつぶし画像を作ってからアフィン変換する、という方式も考えられますが、こちらのほうが少し処理が軽くなるかなと。

def get_contours_similarity(ctr, ctr_tmp, solid_tmp, selected_pt_idx_tmp, sim_th):
    M, result = get_optimum_transform(ctr_tmp, selected_pt_idx_tmp, ctr, sim_th)
    Minv = cv2.invertAffineTransform(M)
    converted_ctr = np.zeros_like(ctr)
    for i in range(ctr.shape[0]):
        converted_ctr[i,0,:] = (Minv[:,0:2] @ ctr[i,0,:]) + Minv[:,2]
    ctr_img = create_solid_contour(converted_ctr, img_shape=solid_tmp.shape)
    val = cv2.matchTemplate(solid_tmp, ctr_img, cv2.TM_CCORR_NORMED)
    return val[0,0], ctr_img
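
A quick check of what cv2.invertAffineTransform() returns (a sketch; for an affine map y = A x + t, the inverse is x = A^-1 y - A^-1 t):

M = np.array([[0.8, -0.2, 10.0],
              [0.2,  0.8,  5.0]])
Minv = cv2.invertAffineTransform(M)
print(np.allclose(Minv[:, 0:2], np.linalg.inv(M[:, 0:2])))           # True
print(np.allclose(Minv[:, 2], -np.linalg.inv(M[:, 0:2]) @ M[:, 2]))  # True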

Template image generation (for "0")

The implementation has changed from last time.
I'll verify it again later.

Here, create_contour_area_image() is assumed to have been applied, so the contour points use the bounding-box origin.

  • Fit an ellipse to the template contour
  • Compute the rotation matrix that makes the ellipse angle upright (0 deg), rotating about the ellipse center
  • The matrix also contains a translation component; adjust it so the origin of the fitted ellipse's bounding rectangle maps to the origin
  • Prepare the template's filled image
  • Warp it, with the output size set to the fitted ellipse's bounding-rectangle dimensions

The same transformation is applied to the comparison targets.

# ctr: Should be output of create_contour_area_image() (Origin of points is the origin of bounding box)
# img_shape: Optional, tuple of (image_height, image_width), determined from fitted ellipse if omitted
def create_upright_solid_contour(ctr,img_shape=(int(0),int(0))):
    (cx,cy),(w,h),angle = cv2.fitEllipse(ctr)
    if img_shape == (int(0),int(0)):
        # Default: same as fitted ellipse
        img_shape = (math.ceil(w), math.ceil(h))
    ctr_img = create_solid_contour(ctr)
    Mrot = cv2.getRotationMatrix2D((cx,cy), angle, 1)
    Mrot[0,2] -= cx - w/2
    Mrot[1,2] -= cy - h/2
    rotated_ctr_img = cv2.warpAffine(ctr_img, Mrot, dsize=img_shape, flags=cv2.INTER_NEAREST)
    return rotated_ctr_img

一致度計算 ("0"の判定用)

比較対象の輪郭をテンプレート同様にまっすぐに回転させた後、テンプレートの縦横比と同じになるようにリサイズして、テンプレートマッチングを実施します。

def get_contours_similarity_zero(solid_tmp, ctr):
    img = create_upright_solid_contour(ctr)
    img = cv2.resize(img, dsize=(solid_tmp.shape[1], solid_tmp.shape[0]), interpolation=cv2.INTER_NEAREST)
    val = cv2.matchTemplate(img, solid_tmp, cv2.TM_CCORR_NORMED)
    return val[0,0], img

Final classification

Compare the target contour against each digit template.
If several similarities exceed the threshold, pick the digit with the highest one.
If none exceeds it, the contour is not any digit.

The threshold is 0.92, common to all digits, for now. That needs review, though...

# ctr: Single contour to compare
# solid_zero: template image of "Zero"
# solid_other: list of template images of other numbers (1,2,3,5), if template does not exist, fill corresponding element with ndarray with shape (1)
# ctr_other: list of contours of other numbers (1,2,3,5), if template does not exist, fill corresponding element with None
# pts_idx_other: list of list of edge points of other numbers (1,2,3,5), if template does not exist, fill corresponding element with None
# debug_number: Optional, if specified, comparing image for the number is returned
# return: determined number (0,1,2,3,5), -1 if none corresponds
def determine_number(ctr, solid_zero, solid_other, ctr_other, pts_idx_other, debug_number=-1):
    # Threshold value of similarity, should be adjusted
    val_th = 0.92
    sim, img = get_contours_similarity_zero(solid_zero, ctr)
    max_val = sim
    max_number = 0
    # For evaluation
    similarities = [sim]
    if debug_number==0:
        dbg_img = img.copy()
    
    numbers = [1,2,3,5]
    for i in range(4):
        if solid_other[i].shape == (1,):
            similarities += [0.0]
            if debug_number == numbers[i]:
                dbg_img = np.zeros((1,1), 'uint8')
        else:
            sim, img = get_contours_similarity(ctr, ctr_other[i], solid_other[i], pts_idx_other[i], val_th)
            similarities += [sim]
            if sim > max_val:
                max_val = sim
                max_number = numbers[i]
            if debug_number == numbers[i]:
                dbg_img = img.copy()
    rtn_number = -1 if max_val < val_th else max_number
    if debug_number != -1:
        return rtn_number, similarities, dbg_img
    else:
        return rtn_number

Other helpers

The following functions are used for things like selecting the digit templates.

def draw_contour(img, ctrs, idx):
    img_with_ctr = cv2.drawContours(img.copy(), [ctrs[idx]], -1, (0,255,0), 2)
    plt.figure(figsize=(6.4,4.8), dpi=100)
    plt.imshow(cv2.cvtColor(img_with_ctr, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.show()
def draw_contour_point(img, ctr, idx):
    img_with_pt = cv2.drawMarker(img.copy(), ctr[idx,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.imshow(cv2.cvtColor(img_with_pt, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.show()

Contour detection first

From here I work on the actual images.
First, run contour detection on all seven images shown above.
It's needed whether I'm choosing templates or attaching ground-truth labels.

imgs = [img1, img2, img3, img4, img5, img6, img7]
resized_imgs = []
ctrs_all = []
for img in imgs:
    ctrs, im = detect_candidate_contours(img)
    resized_imgs += [im]
    ctrs_all += [ctrs]
Resized to  (1067, 800, 3)
Number of contours:  2514
Number of indices0:  1448 indices1:  875
Resized to  (1067, 800, 3)
Number of contours:  2269
Number of indices0:  1265 indices1:  818
Resized to  (1067, 800, 3)
Number of contours:  2062
Number of indices0:  1154 indices1:  718
Resized to  (1067, 800, 3)
Number of contours:  1204
Number of indices0:  450 indices1:  664
Resized to  (1067, 800, 3)
Number of contours:  1613
Number of indices0:  698 indices1:  795
Resized to  (1067, 800, 3)
Number of contours:  1242
Number of indices0:  373 indices1:  777
Resized to  (1067, 800, 3)
Number of contours:  1258
Number of indices0:  555 indices1:  595

Template selection

This was already done before, but I'll redo just one contour using the functions above.
interact can apparently handle multiple arguments as well.

I first tried passing draw_contour() above as-is, but that didn't work.
The arguments must be types interact can handle (bool, int, float, etc.).

Here's the official documentation:

Using Interact

from ipywidgets import interact, fixed

def draw_contour_interact(i_img, idx):
    draw_contour(resized_imgs[i_img], ctrs_all[i_img], idx)

interact(draw_contour_interact, i_img=fixed(4), idx=(0, len(ctrs_all[4])-1));

f:id:nokixa:20220223004718p:plain

The contours previously chosen as templates are shown again.
Binary images for comparison are also created.

Only the "5" template from the first image was changed, since the one chosen before wasn't very good.
Also, the third and fifth images contain no "3", so the template from the first image is reused for them. (I once tried zero-filling instead, but it caused problems.)

ctrs1_idx_zero = 26
ctrs1_idx_one = 27
ctrs1_idx_two = 24
ctrs1_idx_three = 33
# ctrs1_idx_five = 35
ctrs1_idx_five = 8
ctrs1_idx_numbers = [ctrs1_idx_zero, ctrs1_idx_one, ctrs1_idx_two, ctrs1_idx_three, ctrs1_idx_five]

subimgs1 = []
subctrs1 = []
binimgs1 = []
for i,idx in enumerate(ctrs1_idx_numbers):
    img, ctrs = create_contour_area_image(resized_imgs[0], ctrs_all[0], idx)
    if i == 0:
        binimg = create_upright_solid_contour(ctrs)
    else:
        binimg = create_solid_contour(ctrs)
    subimgs1 += [img.copy()]
    subctrs1 += [ctrs.copy()]
    binimgs1 += [binimg.copy()]
    ctr_img = cv2.drawContours(img, [ctrs], -1, (0,255,0), 2)
    plt.subplot(2,5,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(2,5,6+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220223004513p:plain

ctrs3_idx_zero = 7
ctrs3_idx_one = 4
ctrs3_idx_two = 17
ctrs3_idx_five = 6
ctrs3_idx_numbers = [ctrs3_idx_zero, ctrs3_idx_one, ctrs3_idx_two, ctrs3_idx_five]

subimgs3 = []
subctrs3 = []
binimgs3 = []
for i,idx in enumerate(ctrs3_idx_numbers):
    img, ctrs = create_contour_area_image(resized_imgs[2], ctrs_all[2], idx)
    if i == 0:
        binimg = create_upright_solid_contour(ctrs)
    else:
        binimg = create_solid_contour(ctrs)
    subimgs3 += [img.copy()]
    subctrs3 += [ctrs.copy()]
    binimgs3 += [binimg.copy()]
    ctr_img = cv2.drawContours(img, [ctrs], -1, (0,255,0), 2)
    plt.subplot(2,4,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(2,4,5+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

subimgs3.insert(3, subimgs1[3])
subctrs3.insert(3, subctrs1[3])
binimgs3.insert(3, binimgs1[3])

f:id:nokixa:20220223004515p:plain

ctrs5_idx_zero = 3
ctrs5_idx_one = 4
ctrs5_idx_two = 2
ctrs5_idx_five = 5
ctrs5_idx_numbers = [ctrs5_idx_zero, ctrs5_idx_one, ctrs5_idx_two, ctrs5_idx_five]

subimgs5 = []
subctrs5 = []
binimgs5 = []
for i,idx in enumerate(ctrs5_idx_numbers):
    img, ctrs = create_contour_area_image(resized_imgs[4], ctrs_all[4], idx)
    if i == 0:
        binimg = create_upright_solid_contour(ctrs)
    else:
        binimg = create_solid_contour(ctrs)
    subimgs5 += [img.copy()]
    subctrs5 += [ctrs.copy()]
    binimgs5 += [binimg.copy()]
    ctr_img = cv2.drawContours(img, [ctrs], -1, (0,255,0), 2)
    plt.subplot(2,4,1+i), plt.imshow(cv2.cvtColor(ctr_img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(2,4,5+i), plt.imshow(binimg,cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

subimgs5.insert(3, subimgs1[3])
subctrs5.insert(3, subctrs1[3])
binimgs5.insert(3, binimgs1[3])

f:id:nokixa:20220223004518p:plain

Selecting template contour points

From each digit template's contour points, select the ones to use in ICP.
Check the point counts and thin them out appropriately.

[print(subctrs1[i].shape[0], ', ') for i in range(len(ctrs1_idx_numbers))];
133 , 
141 , 
212 , 
214 , 
131 , 
[print(subctrs3[i].shape[0], ', ') for i in range(len(ctrs3_idx_numbers))];
139 , 
149 , 
214 , 
214 , 
[print(subctrs5[i].shape[0], ', ') for i in range(len(ctrs5_idx_numbers))];
100 , 
88 , 
159 , 
214 , 

Thinning to about 1 point in 5 seems right.
List comprehensions are handy here: the 1-in-5 thinning fits in one line (an equivalent slice-style one-liner is sketched below).

Pythonのリスト(配列)の特定の要素を抽出、置換、変換
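
As a sketch, the same 1-in-5 selection can also be written with range's step argument:

selected = list(range(0, subctrs1[1].shape[0], 5))  # equivalent to the comprehension below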

subctrs1_selected_pts_one = [i for i in range(subctrs1[1].shape[0]) if i % 5 == 0]
subctrs1_selected_pts_two = [i for i in range(subctrs1[2].shape[0]) if i % 5 == 0]
subctrs1_selected_pts_three = [i for i in range(subctrs1[3].shape[0]) if i % 5 == 0]
subctrs1_selected_pts_five = [i for i in range(subctrs1[4].shape[0]) if i % 5 == 0]

subctrs1_selected_pts = [subctrs1_selected_pts_one, subctrs1_selected_pts_two, subctrs1_selected_pts_three, subctrs1_selected_pts_five]
for i in range(4):
    img = subimgs1[i+1].copy()
    for p in subctrs1_selected_pts[i]:
        img = cv2.drawMarker(img, subctrs1[i+1][p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(1,4,1+i), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220223004520p:plain

subctrs3_selected_pts_one = [i for i in range(subctrs3[1].shape[0]) if i % 5 == 0]
subctrs3_selected_pts_two = [i for i in range(subctrs3[2].shape[0]) if i % 5 == 0]
subctrs3_selected_pts_three = [i for i in range(subctrs3[3].shape[0]) if i % 5 == 0]
subctrs3_selected_pts_five = [i for i in range(subctrs3[4].shape[0]) if i % 5 == 0]

subctrs3_selected_pts = [subctrs3_selected_pts_one, subctrs3_selected_pts_two, subctrs3_selected_pts_three, subctrs3_selected_pts_five]
for i in range(4):
    if subimgs3[i+1].shape == (1,):
        continue
    img = subimgs3[i+1].copy()
    for p in subctrs3_selected_pts[i]:
        img = cv2.drawMarker(img, subctrs3[i+1][p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(1,4,1+i), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220223004523p:plain

subctrs5_selected_pts_one = [i for i in range(subctrs5[1].shape[0]) if i % 5 == 0]
subctrs5_selected_pts_two = [i for i in range(subctrs5[2].shape[0]) if i % 5 == 0]
subctrs5_selected_pts_three = [i for i in range(subctrs5[3].shape[0]) if i % 5 == 0]
subctrs5_selected_pts_five = [i for i in range(subctrs5[4].shape[0]) if i % 5 == 0]

subctrs5_selected_pts = [subctrs5_selected_pts_one, subctrs5_selected_pts_two, subctrs5_selected_pts_three, subctrs5_selected_pts_five]
for i in range(4):
    if subimgs5[i+1].shape == (1,):
        continue
    img = subimgs5[i+1].copy()
    for p in subctrs5_selected_pts[i]:
        img = cv2.drawMarker(img, subctrs5[i+1][p,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    plt.subplot(1,4,1+i), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220223004525p:plain

Ground-truth labeling

Attach a label to each detected contour in every image.
Labels are 0, 1, 2, 3, 5, or -1 for none of these.

I use interact(), but it's tedious work all the same...

interact(draw_contour_interact, i_img=fixed(0), idx=(0, len(ctrs_all[0])-1));

f:id:nokixa:20220223004722p:plain

labels1 = [-1,-1,-1,-1,-1
           ,-1,5,0,5,1
           ,5,0,2,1,2
           ,-1,-1,1,1,5
           ,0,2,5,0,2
           ,5,0,1,2,-1
           ,5,1,2,3,1
           ,5,0,-1]
interact(draw_contour_interact, i_img=fixed(1), idx=(0, len(ctrs_all[1])-1));

f:id:nokixa:20220223004725p:plain

labels2 = [-1,-1,-1,-1,-1
           ,-1,5,0,5,1
           ,5,0,2,1,2
           ,-1,-1,1,1,5
           ,0,-1,2,5,0
           ,2,5,0,1,2
           ,5,0,1,-1,2
           ,-1,-1,-1,3,-1
           ,5,0,-1,1,-1
           ,-1]
interact(draw_contour_interact, i_img=fixed(2), idx=(0, len(ctrs_all[2])-1));

f:id:nokixa:20220223004728p:plain

labels3 = [-1,-1,-1,-1,1
           ,1,5,0,1,1
           ,5,0,5,0,-1
           ,-1,-1,2,-1,-1
           ,-1,1,1,1,-1
           ,1,-1,-1,1,1
           ,-1,2,-1,1,-1
           ,1,2,-1,1,-1
           ,-1,2,5,-1,0
           ,-1,1,1]
interact(draw_contour_interact, i_img=fixed(3), idx=(0, len(ctrs_all[3])-1));

f:id:nokixa:20220223004731p:plain

labels4 = [-1,-1,-1,-1,-1
           ,-1,-1,-1,-1,-1
           ,-1,-1,-1,-1,-1
           ,-1,-1,-1,1,1
           ,1,1,1,1,1
           ,-1,5,0,2,5
           ,0,2,1,2,2
           ,-1,-1,-1,1,1
           ,1]
interact(draw_contour_interact, i_img=fixed(4), idx=(0, len(ctrs_all[4])-1));

f:id:nokixa:20220223004734p:plain

labels5 = [-1,-1,2,0,1
           ,5,-1,1,1,1
           ,1,1,1,1,1
           ,1,-1,5,1,0
           ,5,1,2,0,5
           ,0,2,1,2,2
           ,-1,-1,1,1,1
           ]
interact(draw_contour_interact, i_img=fixed(5), idx=(0, len(ctrs_all[5])-1));

f:id:nokixa:20220223004738p:plain

labels6 = [-1,0,1,5,2
                ,-1,1,1,1,1
                ,5,1,0,5,0
                ,2,1,5,0,2
                ,2,2,1,-1,-1
                ,1,1,1,1,1
                ,1,1,1]
interact(draw_contour_interact, i_img=fixed(6), idx=(0, len(ctrs_all[6])-1));

f:id:nokixa:20220223004741p:plain

labels7 = [-1,-1,-1,-1,-1
           ,-1,1,2,2,2
           ,2,1,2,2,2
           ,1,-1,-1,-1,2
           ,1,2,1,1]

Running point classification on each contour

Now to classify the points.
For verification, I also look at some information beyond the classification result itself.

Starting with the first image.

subimgs = []
subctrs = []
det_numbers1 = []
similarities1 = []
dbg_imgs = []
for i in range(len(ctrs_all[0])):
    subimg, subctr = create_contour_area_image(resized_imgs[0], ctrs_all[0], i)
    debug_number= 0 if labels1[i] == -1 else labels1[i]
    det_number, sim, img = determine_number(subctr, binimgs1[0], binimgs1[1:5], subctrs1[1:5], subctrs1_selected_pts, debug_number=debug_number)
    subimgs += [subimg]
    subctrs += [subctr]
    det_numbers1 += [det_number]
    similarities1 += [sim]
    dbg_imgs += [img]
icp: nn_idx:  [81, 91, 102, 112, 123, 132, 141, 149, 158, 166, 175, 183, 192, 201, 209, 222, 233, 243, 248, 249, 22, 32, 39, 48, 56, 61, 62, 71, 79]
icp: nn_idx:  [80, 86, 99, 110, 123, 131, 140, 148, 157, 166, 175, 184, 193, 202, 210, 224, 235, 247, 252, 11, 20, 32, 38, 47, 56, 61, 62, 70, 79]
icp: nn_idx:  [81, 85, 98, 110, 123, 131, 140, 148, 157, 166, 175, 184, 193, 201, 217, 227, 240, 252, 255, 12, 21, 32, 38, 47, 56, 61, 62, 70, 79]
icp: nn_idx:  [81, 86, 98, 111, 124, 131, 140, 149, 158, 166, 175, 184, 193, 202, 219, 229, 242, 254, 258, 12, 21, 32, 39, 48, 56, 61, 62, 71, 79]
icp: nn_idx:  [81, 86, 99, 112, 125, 131, 140, 149, 158, 167, 175, 184, 193, 202, 220, 230, 243, 256, 4, 12, 21, 32, 39, 48, 56, 61, 62, 71, 80]
icp: nn_idx:  [82, 87, 100, 113, 126, 132, 141, 149, 158, 167, 175, 184, 193, 202, 221, 232, 244, 257, 4, 13, 22, 32, 39, 48, 57, 62, 63, 71, 80]
icp: nn_idx:  [82, 87, 100, 113, 126, 132, 141, 150, 158, 167, 176, 184, 193, 202, 222, 232, 245, 258, 5, 14, 22, 32, 40, 48, 57, 62, 63, 72, 80]
icp: nn_idx:  [82, 87, 100, 113, 126, 133, 141, 150, 159, 167, 176, 184, 193, 202, 222, 233, 246, 259, 5, 14, 23, 32, 40, 48, 57, 62, 63, 72, 81]
icp: nn_idx:  [82, 87, 100, 113, 126, 133, 142, 150, 159, 167, 176, 185, 193, 202, 225, 233, 246, 259, 6, 14, 23, 32, 40, 49, 57, 62, 64, 72, 81]
icp: nn_idx:  [82, 88, 100, 113, 126, 133, 142, 150, 159, 167, 176, 185, 193, 202, 225, 233, 246, 259, 6, 14, 23, 32, 40, 49, 57, 62, 64, 72, 81]
icp: nn_idx:  [82, 88, 101, 113, 126, 133, 142, 150, 159, 167, 176, 185, 193, 202, 225, 233, 246, 259, 6, 14, 23, 32, 40, 49, 57, 62, 64, 72, 81]
icp: nn_idx:  [82, 88, 101, 114, 126, 133, 142, 150, 159, 167, 176, 185, 193, 202, 225, 233, 246, 259, 6, 14, 23, 32, 40, 49, 57, 62, 64, 72, 81]
icp: nn_idx:  [82, 88, 101, 114, 127, 133, 142, 150, 159, 167, 176, 185, 193, 202, 225, 233, 246, 259, 6, 14, 23, 32, 40, 49, 57, 62, 64, 72, 81]
icp: nn_idx:  [82, 88, 101, 114, 127, 133, 142, 150, 159, 167, 176, 185, 193, 202, 225, 233, 246, 259, 6, 14, 23, 32, 40, 49, 57, 62, 64, 72, 81]
icp: converged in  13  iteration(s)
icp: nn_idx:  [21, 32, 41, 52, 63, 74, 91, 95, 100, 105, 111, 149, 158, 148, 118, 119, 124, 128, 138, 150, 159, 170, 180, 191, 201, 212, 207, 197, 188, 179, 169, 48, 50, 47, 38, 33, 26, 248, 243, 246, 251, 8, 17]
icp: nn_idx:  [20, 30, 41, 52, 64, 76, 87, 93, 98, 104, 111, 148, 157, 146, 120, 121, 126, 129, 136, 147, 158, 169, 181, 191, 203, 213, 209, 199, 189, 179, 170, 48, 51, 47, 38, 32, 25, 248, 245, 247, 254, 6, 16]

...

icp: nn_idx:  [44, 48, 50, 56, 59, 70, 78, 71, 72, 0, 2, 3, 4, 5, 7, 11, 14, 18, 19, 30, 33, 34, 35, 31, 28, 26, 20, 21, 22, 23, 24, 25, 53, 54, 52, 51, 46, 45, 42, 40, 39, 41, 43]
icp: converged in  5  iteration(s)
icp: nn_idx:  [5, 9, 13, 16, 17, 18, 20, 22, 19, 30, 33, 34, 40, 43, 48, 57, 64, 66, 60, 55, 25, 24, 74, 71, 0, 2, 4]
icp: nn_idx:  [4, 7, 13, 16, 17, 18, 20, 22, 19, 29, 33, 35, 40, 44, 48, 57, 63, 67, 60, 55, 25, 24, 74, 71, 0, 2, 3]
icp: nn_idx:  [4, 7, 12, 16, 17, 18, 20, 22, 19, 29, 32, 35, 40, 44, 48, 57, 63, 67, 60, 55, 25, 24, 74, 71, 0, 1, 3]
icp: nn_idx:  [4, 7, 12, 16, 17, 18, 21, 74, 20, 29, 32, 35, 40, 44, 48, 57, 63, 67, 60, 55, 25, 24, 73, 71, 0, 1, 3]
icp: nn_idx:  [4, 7, 12, 16, 17, 19, 21, 74, 20, 29, 32, 35, 40, 44, 48, 57, 63, 67, 60, 55, 53, 24, 73, 71, 0, 1, 3]
icp: nn_idx:  [4, 7, 12, 16, 17, 19, 21, 74, 20, 28, 32, 35, 40, 44, 48, 57, 63, 67, 60, 55, 53, 24, 73, 71, 0, 1, 3]
icp: nn_idx:  [4, 7, 12, 16, 17, 19, 21, 74, 20, 28, 32, 35, 40, 44, 48, 57, 63, 67, 60, 55, 53, 24, 73, 71, 0, 1, 3]
icp: converged in  6  iteration(s)
plt.figure(figsize=(12.8,20),dpi=100)
plt.subplots_adjust(wspace=2, hspace=1.0)
pltx = 5
plty = len(ctrs_all[0]) // pltx  # np.int was removed in recent NumPy; use integer division
if len(ctrs_all[0]) % pltx:
    plty +=1
# Dictionary containing number of correct answers and number of same labels
results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i in range(len(ctrs_all[0])):
    if det_numbers1[i] == labels1[i]:
        results[labels1[i]][0] += 1
    results[labels1[i]][1] += 1
    title = 'Number:%d\n (' %(det_numbers1[i])
    for s in similarities1[i]:
        title += '%.2f ' %(s)
    title += ')'
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+1), plt.imshow(cv2.cvtColor(subimgs[i],cv2.COLOR_BGR2RGB)), plt.title(title),plt.xticks([]),plt.yticks([])
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+6), plt.imshow(dbg_imgs[i], cmap='gray'),plt.xticks([]),plt.yticks([])
plt.show()
for k,v in results.items():
    print(k, ': ', v[0], ' / ', v[1])

f:id:nokixa:20220223004528p:plain

-1 :  9  /  10
0 :  6  /  6
1 :  7  /  7
2 :  6  /  6
3 :  1  /  1
5 :  6  /  8

A fair result.
The "0", "1", "2", and "3" characters are all classified correctly.
"3" only has one instance, so that's hardly surprising...

The problems are:

  • some non-digit contours were recognized as digits
    Looking closer, the "白" character is recognized as a "0". Since "0" actually doesn't matter for the final point total, I'll let that pass for now.
  • misrecognition of "5"
    There are two misrecognitions. One seems caused by bad contour extraction, where the "点" character got attached. The other appears to contain something like a white scratch or noise? Its similarity is right at the threshold, though, so threshold tuning may be enough.

That's where things stand.

Note that the screening for insufficient similarity at the initial-transformation stage is not in place yet.
The ICP processing takes quite a while.
I'll decide the thresholds later.

Let's run the point classification on the other images too.
For each image, the template data of the matching year is used.

subimgs = []
subctrs = []
det_numbers2 = []
similarities2 = []
dbg_imgs = []
for i in range(len(ctrs_all[1])):
    subimg, subctr = create_contour_area_image(resized_imgs[1], ctrs_all[1], i)
    debug_number= 0 if labels2[i] == -1 else labels2[i]
    det_number, sim, img = determine_number(subctr, binimgs1[0], binimgs1[1:5], subctrs1[1:5], subctrs1_selected_pts, debug_number=debug_number)
    subimgs += [subimg]
    subctrs += [subctr]
    det_numbers2 += [det_number]
    similarities2 += [sim]
    dbg_imgs += [img]
icp: nn_idx:  [208, 220, 230, 240, 251, 3, 12, 20, 29, 37, 45, 54, 62, 71, 79, 88, 99, 109, 112, 141, 150, 158, 165, 175, 184, 189, 190, 198, 206]
icp: nn_idx:  [207, 215, 227, 239, 251, 3, 11, 20, 29, 37, 46, 54, 63, 72, 80, 92, 104, 116, 132, 140, 149, 157, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [207, 214, 227, 240, 252, 3, 12, 20, 29, 37, 46, 54, 65, 72, 82, 94, 107, 120, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 215, 227, 240, 253, 4, 12, 21, 29, 38, 46, 55, 65, 72, 86, 96, 109, 121, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 215, 228, 240, 253, 4, 13, 21, 30, 38, 47, 55, 65, 73, 87, 97, 110, 122, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 215, 228, 240, 254, 4, 13, 22, 30, 39, 47, 56, 65, 73, 87, 97, 110, 123, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 216, 228, 240, 254, 5, 13, 22, 30, 39, 47, 56, 65, 73, 87, 98, 110, 123, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 216, 228, 240, 254, 5, 13, 22, 31, 39, 48, 56, 65, 73, 88, 98, 110, 123, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 216, 228, 240, 254, 5, 14, 22, 31, 39, 48, 56, 65, 73, 88, 98, 111, 123, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 216, 228, 241, 254, 5, 14, 22, 31, 39, 48, 56, 65, 73, 88, 98, 111, 123, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 216, 229, 241, 254, 5, 14, 22, 31, 39, 48, 56, 65, 73, 88, 98, 111, 123, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: nn_idx:  [208, 216, 229, 241, 254, 5, 14, 22, 31, 39, 48, 56, 65, 73, 88, 98, 111, 123, 132, 140, 149, 158, 165, 175, 183, 188, 189, 199, 206]
icp: converged in  11  iteration(s)
icp: nn_idx:  [223, 229, 234, 240, 246, 8, 15, 23, 32, 40, 49, 55, 65, 66, 64, 63, 73, 81, 86, 92, 98, 103, 109, 115, 121, 126, 134, 141, 147, 155, 164, 37, 28, 241, 233, 185, 177, 176, 178, 181, 190, 198, 221]
icp: nn_idx:  [220, 225, 234, 240, 248, 5, 13, 22, 30, 39, 49, 55, 65, 66, 64, 63, 73, 81, 84, 90, 97, 104, 111, 118, 124, 126, 134, 142, 148, 155, 165, 35, 27, 241, 232, 186, 179, 178, 180, 183, 192, 201, 218]
icp: nn_idx:  [219, 225, 233, 240, 248, 4, 12, 21, 30, 39, 48, 54, 65, 66, 64, 63, 73, 81, 84, 90, 97, 104, 111, 118, 125, 126, 134, 142, 148, 156, 165, 35, 27, 241, 231, 187, 180, 179, 181, 184, 193, 203, 217]

...

icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 4, 3, 1, 9]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 9, 4, 3, 1]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 4, 3, 1, 9]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 9, 4, 3, 1]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 4, 3, 1, 9]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 9, 4, 3, 1]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 4, 3, 1, 9]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 9, 4, 3, 1]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 4, 3, 1, 9]
icp: Not converged
icp: nn_idx:  [42, 0, 1, 2, 4, 5, 7, 8, 10, 11, 13, 16, 18, 17, 15, 14, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 34, 9, 6, 3, 40, 38, 36, 35, 37, 33, 39, 41, 12]
icp: nn_idx:  [42, 0, 2, 3, 5, 6, 7, 8, 10, 11, 13, 15, 17, 16, 18, 14, 19, 20, 21, 22, 23, 24, 26, 27, 28, 29, 30, 31, 32, 33, 34, 9, 12, 4, 40, 39, 37, 36, 35, 38, 41, 1, 25]

...

icp: nn_idx:  [21, 22, 23, 24, 25, 27, 28, 30, 32, 33, 35, 36, 38, 40, 42, 0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 4, 41, 39, 37, 34, 11, 13, 15, 31, 26, 19, 18, 16, 14, 17, 20, 12, 29]
icp: nn_idx:  [21, 22, 23, 24, 25, 27, 28, 30, 32, 33, 34, 36, 38, 40, 42, 0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 4, 41, 39, 37, 35, 11, 13, 15, 31, 26, 19, 18, 16, 14, 17, 20, 12, 29]
icp: nn_idx:  [21, 22, 23, 24, 25, 27, 28, 30, 32, 33, 34, 36, 38, 40, 42, 0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 4, 41, 39, 37, 35, 11, 13, 15, 31, 26, 19, 18, 16, 14, 17, 20, 12, 29]
icp: converged in  3  iteration(s)
icp: nn_idx:  [5, 7, 10, 14, 17, 16, 11, 9, 13, 19, 22, 24, 26, 30, 33, 36, 40, 42, 41, 37, 32, 34, 38, 0, 1, 2, 4]
icp: nn_idx:  [5, 7, 10, 14, 17, 16, 11, 9, 13, 19, 22, 24, 26, 30, 33, 36, 40, 42, 41, 37, 32, 34, 38, 0, 1, 3, 4]
icp: nn_idx:  [5, 7, 10, 14, 17, 16, 11, 9, 13, 19, 22, 24, 26, 30, 33, 36, 40, 42, 41, 37, 32, 34, 38, 0, 1, 3, 4]
icp: converged in  2  iteration(s)
plt.figure(figsize=(12.8,20),dpi=100)
plt.subplots_adjust(wspace=2, hspace=1.2)
pltx = 5
plty = len(ctrs_all[1]) // pltx
if len(ctrs_all[1]) % pltx:
    plty += 1
# Dictionary containing number of correct answers and number of same labels
results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i in range(len(ctrs_all[1])):
    if det_numbers2[i] == labels2[i]:
        results[labels2[i]][0] += 1
    results[labels2[i]][1] += 1
    title = 'Number:%d\n (' %(det_numbers2[i])
    for s in similarities2[i]:
        title += '%.2f ' %(s)
    title += ')'
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+1), plt.imshow(cv2.cvtColor(subimgs[i],cv2.COLOR_BGR2RGB)), plt.title(title),plt.xticks([]),plt.yticks([])
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+6), plt.imshow(dbg_imgs[i], cmap='gray'),plt.xticks([]),plt.yticks([])
plt.show()
for k,v in results.items():
    print(k, ': ', v[0], ' / ', v[1])

f:id:nokixa:20220223004533p:plain

-1 :  11  /  17
0 :  7  /  7
1 :  7  /  7
2 :  6  /  6
3 :  1  /  1
5 :  7  /  8
subimgs = []
subctrs = []
det_numbers3 = []
similarities3 = []
dbg_imgs = []
for i in range(len(ctrs_all[2])):
    subimg, subctr = create_contour_area_image(resized_imgs[2], ctrs_all[2], i)
    debug_number = 0 if labels3[i] == -1 else labels3[i]
    det_number, sim, img = determine_number(subctr, binimgs3[0], binimgs3[1:5], subctrs3[1:5], subctrs3_selected_pts, debug_number=debug_number)
    subimgs += [subimg]
    subctrs += [subctr]
    det_numbers3 += [det_number]
    similarities3 += [sim]
    dbg_imgs += [img]
icp: nn_idx:  [101, 110, 119, 131, 138, 146, 154, 161, 169, 176, 184, 191, 199, 207, 218, 227, 236, 235, 237, 25, 33, 40, 48, 56, 57, 52, 58, 65, 72, 94]
icp: nn_idx:  [95, 106, 118, 128, 136, 145, 153, 161, 169, 177, 185, 193, 202, 209, 221, 233, 243, 242, 15, 23, 31, 39, 47, 55, 56, 51, 57, 65, 72, 78]
icp: nn_idx:  [94, 105, 118, 128, 137, 145, 153, 161, 169, 177, 185, 193, 202, 213, 224, 236, 247, 246, 15, 23, 31, 39, 47, 55, 56, 51, 57, 64, 72, 78]
icp: nn_idx:  [94, 106, 118, 129, 137, 145, 153, 161, 169, 177, 185, 193, 202, 215, 226, 238, 249, 7, 15, 23, 31, 39, 47, 55, 56, 51, 57, 64, 72, 78]
icp: nn_idx:  [95, 107, 119, 129, 137, 145, 153, 161, 169, 177, 185, 193, 202, 216, 228, 240, 251, 8, 16, 23, 31, 39, 47, 55, 56, 52, 57, 65, 72, 78]
icp: nn_idx:  [95, 107, 120, 129, 137, 145, 153, 161, 169, 177, 185, 193, 202, 217, 229, 241, 252, 8, 16, 24, 32, 40, 48, 56, 57, 52, 58, 65, 72, 78]
icp: nn_idx:  [95, 107, 120, 129, 137, 145, 153, 161, 169, 177, 185, 193, 202, 218, 229, 242, 253, 8, 16, 24, 32, 40, 48, 56, 57, 52, 58, 66, 73, 78]
icp: nn_idx:  [96, 108, 120, 129, 137, 145, 153, 161, 169, 177, 185, 193, 202, 218, 230, 242, 253, 8, 16, 24, 32, 40, 48, 56, 57, 53, 58, 66, 73, 78]
icp: nn_idx:  [96, 108, 120, 129, 137, 145, 153, 161, 169, 177, 185, 193, 202, 218, 230, 243, 253, 8, 16, 24, 32, 40, 48, 56, 57, 53, 58, 66, 73, 78]
icp: nn_idx:  [96, 108, 120, 129, 137, 145, 153, 161, 169, 177, 185, 193, 202, 218, 230, 243, 253, 8, 16, 24, 32, 40, 48, 56, 57, 53, 58, 66, 73, 78]
icp: converged in  9  iteration(s)
icp: nn_idx:  [231, 236, 242, 246, 14, 22, 30, 37, 44, 51, 57, 64, 65, 63, 66, 73, 80, 86, 92, 97, 103, 108, 114, 120, 125, 132, 138, 145, 151, 158, 164, 39, 33, 25, 234, 191, 187, 180, 186, 195, 202, 220, 226]
icp: nn_idx:  [230, 237, 244, 250, 12, 20, 28, 36, 43, 50, 57, 64, 63, 65, 66, 73, 77, 82, 88, 95, 102, 109, 116, 123, 128, 131, 138, 145, 151, 158, 165, 38, 31, 24, 234, 193, 188, 181, 187, 196, 204, 217, 224]

...

icp: nn_idx:  [150, 0, 1, 3, 6, 20, 23, 30, 34, 41, 46, 51, 57, 62, 71, 73, 75, 78, 80, 84, 89, 94, 99, 100, 101, 97, 93, 95, 98, 104, 105, 110, 115, 118, 122, 123, 121, 120, 126, 128, 137, 144, 148]
icp: nn_idx:  [150, 151, 1, 3, 6, 20, 23, 30, 34, 41, 46, 51, 57, 62, 71, 73, 75, 78, 80, 84, 89, 94, 99, 100, 101, 97, 93, 95, 98, 104, 105, 110, 115, 118, 122, 123, 121, 120, 126, 128, 137, 144, 148]
icp: nn_idx:  [150, 151, 1, 3, 6, 20, 23, 30, 34, 41, 46, 51, 57, 62, 71, 73, 75, 78, 80, 84, 89, 94, 99, 100, 101, 97, 93, 95, 98, 104, 105, 110, 115, 118, 122, 123, 121, 120, 126, 128, 137, 144, 148]
icp: converged in  11  iteration(s)
icp: nn_idx:  [82, 83, 88, 94, 95, 96, 98, 99, 101, 106, 130, 132, 137, 144, 150, 1, 6, 10, 18, 25, 23, 122, 123, 118, 113, 115, 34, 38, 45, 51, 58, 64, 72, 77, 81]
icp: nn_idx:  [83, 82, 88, 95, 94, 96, 98, 100, 101, 106, 129, 131, 137, 146, 150, 1, 5, 10, 18, 25, 23, 122, 123, 118, 113, 115, 34, 38, 45, 52, 58, 65, 71, 75, 79]
icp: nn_idx:  [83, 82, 88, 95, 94, 96, 98, 100, 101, 106, 129, 131, 137, 146, 151, 1, 5, 10, 18, 25, 23, 122, 123, 118, 113, 115, 34, 38, 45, 52, 58, 65, 70, 74, 78]
icp: nn_idx:  [82, 83, 88, 95, 94, 96, 98, 100, 101, 106, 129, 131, 137, 146, 151, 1, 5, 10, 18, 25, 23, 122, 123, 119, 113, 115, 34, 38, 45, 52, 58, 65, 70, 74, 78]
icp: nn_idx:  [83, 82, 89, 95, 94, 96, 98, 100, 101, 106, 129, 131, 137, 146, 151, 1, 5, 10, 18, 25, 23, 122, 123, 119, 113, 115, 34, 38, 45, 52, 58, 65, 70, 74, 78]
icp: nn_idx:  [83, 82, 89, 95, 94, 96, 98, 100, 101, 106, 129, 131, 137, 146, 151, 1, 5, 10, 18, 25, 23, 122, 123, 119, 113, 115, 34, 38, 45, 52, 58, 65, 70, 74, 78]
icp: converged in  5  iteration(s)
plt.figure(figsize=(12.8,20),dpi=100)
plt.subplots_adjust(wspace=2, hspace=1.2)
pltx = 5
plty = len(ctrs_all[2]) // pltx
if len(ctrs_all[2]) % pltx:
    plty += 1
# Dictionary containing number of correct answers and number of same labels
results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i in range(len(ctrs_all[2])):
    if det_numbers3[i] == labels3[i]:
        results[labels3[i]][0] += 1
    results[labels3[i]][1] += 1
    title = 'Number:%d\n (' %(det_numbers3[i])
    for s in similarities3[i]:
        title += '%.2f ' %(s)
    title += ')'
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+1), plt.imshow(cv2.cvtColor(subimgs[i],cv2.COLOR_BGR2RGB)), plt.title(title),plt.xticks([]),plt.yticks([])
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+6), plt.imshow(dbg_imgs[i], cmap='gray'),plt.xticks([]),plt.yticks([])
plt.show()
for k,v in results.items():
    print(k, ': ', v[0], ' / ', v[1])

f:id:nokixa:20220223004538p:plain

-1 :  18  /  21
0 :  4  /  4
1 :  13  /  15
2 :  4  /  4
3 :  0  /  0
5 :  4  /  4
subimgs = []
subctrs = []
det_numbers4 = []
similarities4 = []
dbg_imgs = []
for i in range(len(ctrs_all[3])):
    subimg, subctr = create_contour_area_image(resized_imgs[3], ctrs_all[3], i)
    debug_number = 0 if labels4[i] == -1 else labels4[i]
    det_number, sim, img = determine_number(subctr, binimgs3[0], binimgs3[1:5], subctrs3[1:5], subctrs3_selected_pts, debug_number=debug_number)
    subimgs += [subimg]
    subctrs += [subctr]
    det_numbers4 += [det_number]
    similarities4 += [sim]
    dbg_imgs += [img]
icp: nn_idx:  [32, 34, 36, 38, 39, 41, 44, 45, 47, 51, 53, 55, 1, 3, 5, 6, 9, 10, 12, 15, 50, 48, 46, 26, 25, 23, 24, 27, 28, 30]
icp: nn_idx:  [32, 34, 36, 37, 39, 42, 44, 45, 47, 51, 53, 55, 1, 3, 5, 7, 9, 10, 12, 15, 50, 48, 46, 26, 25, 23, 24, 27, 28, 30]
icp: nn_idx:  [32, 34, 35, 37, 39, 42, 44, 45, 47, 51, 53, 54, 1, 3, 5, 7, 9, 10, 12, 15, 50, 48, 46, 26, 25, 23, 24, 27, 28, 30]
icp: nn_idx:  [32, 34, 35, 37, 39, 42, 43, 45, 47, 51, 53, 54, 1, 3, 5, 7, 9, 10, 12, 15, 50, 48, 46, 26, 25, 23, 24, 27, 28, 30]
icp: nn_idx:  [32, 34, 35, 37, 38, 42, 43, 45, 47, 51, 53, 54, 1, 3, 5, 7, 9, 10, 12, 15, 50, 48, 46, 26, 25, 23, 24, 27, 28, 30]
icp: nn_idx:  [32, 34, 35, 37, 38, 42, 43, 45, 47, 51, 53, 54, 1, 3, 5, 7, 9, 10, 12, 15, 50, 48, 23, 26, 25, 22, 24, 27, 28, 30]
icp: nn_idx:  [32, 33, 35, 37, 38, 42, 43, 45, 47, 51, 53, 54, 1, 3, 5, 7, 9, 10, 12, 15, 50, 20, 23, 26, 25, 22, 24, 27, 28, 30]
icp: nn_idx:  [31, 33, 35, 37, 38, 42, 43, 45, 47, 51, 53, 54, 0, 3, 6, 7, 9, 11, 12, 15, 19, 20, 23, 26, 25, 22, 24, 27, 28, 30]
icp: nn_idx:  [31, 33, 35, 37, 38, 42, 43, 45, 47, 51, 52, 54, 0, 3, 6, 8, 10, 11, 13, 15, 19, 20, 23, 25, 26, 22, 24, 27, 28, 29]
icp: nn_idx:  [31, 33, 35, 37, 38, 42, 43, 45, 47, 49, 52, 54, 0, 3, 7, 8, 10, 12, 13, 15, 19, 20, 23, 25, 26, 22, 24, 27, 28, 29]
icp: nn_idx:  [31, 33, 35, 37, 38, 42, 43, 45, 47, 49, 52, 54, 0, 3, 7, 9, 11, 12, 13, 15, 19, 20, 23, 25, 26, 22, 24, 27, 28, 29]
icp: nn_idx:  [31, 33, 35, 37, 38, 42, 43, 45, 47, 49, 52, 54, 0, 3, 7, 9, 11, 12, 13, 15, 19, 20, 23, 25, 26, 22, 24, 27, 28, 29]
icp: converged in  11  iteration(s)
icp: nn_idx:  [7, 8, 10, 11, 13, 15, 17, 19, 21, 23, 26, 27, 28, 25, 24, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 42, 43, 46, 48, 50, 51, 14, 12, 54, 52, 53, 55, 0, 2, 4, 6]
icp: nn_idx:  [7, 8, 9, 11, 13, 15, 17, 19, 20, 23, 26, 27, 25, 28, 24, 29, 30, 31, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 43, 46, 48, 50, 51, 12, 10, 54, 53, 52, 55, 0, 2, 4, 6]
icp: nn_idx:  [7, 8, 9, 11, 13, 15, 17, 19, 20, 23, 26, 27, 25, 28, 24, 29, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40, 41, 42, 44, 46, 48, 50, 51, 12, 10, 54, 53, 52, 55, 0, 2, 4, 6]

...

icp: nn_idx:  [2, 4, 7, 10, 13, 25, 30, 35, 39, 46, 51, 56, 61, 66, 75, 77, 79, 82, 84, 87, 93, 97, 103, 102, 104, 100, 96, 98, 99, 106, 108, 112, 117, 119, 125, 126, 127, 122, 123, 130, 131, 139, 0]
icp: nn_idx:  [2, 4, 7, 10, 12, 25, 30, 35, 39, 46, 51, 56, 61, 66, 75, 77, 79, 82, 84, 87, 93, 98, 103, 102, 104, 100, 96, 97, 99, 106, 108, 112, 117, 119, 125, 126, 127, 123, 122, 130, 131, 139, 0]
icp: nn_idx:  [2, 4, 7, 10, 12, 25, 30, 35, 39, 46, 51, 56, 61, 66, 75, 77, 79, 82, 84, 87, 93, 98, 103, 102, 104, 100, 96, 97, 99, 106, 108, 112, 117, 119, 125, 126, 127, 123, 122, 130, 131, 139, 0]
icp: converged in  9  iteration(s)
icp: nn_idx:  [84, 85, 89, 95, 96, 97, 100, 99, 102, 105, 112, 120, 132, 137, 3, 7, 12, 17, 22, 28, 30, 125, 124, 119, 114, 39, 38, 41, 47, 54, 61, 68, 73, 77, 81]
icp: nn_idx:  [83, 85, 89, 96, 95, 97, 56, 100, 102, 106, 112, 120, 131, 138, 4, 8, 12, 17, 21, 29, 30, 125, 124, 119, 114, 39, 38, 42, 47, 55, 61, 67, 73, 77, 81]
icp: nn_idx:  [83, 85, 89, 96, 95, 97, 56, 100, 102, 106, 113, 120, 130, 138, 5, 8, 12, 17, 22, 29, 30, 125, 124, 119, 114, 39, 38, 42, 47, 55, 61, 68, 73, 76, 80]
icp: nn_idx:  [82, 85, 89, 96, 95, 97, 56, 100, 102, 106, 113, 120, 130, 138, 5, 8, 12, 17, 23, 29, 30, 125, 124, 119, 114, 39, 38, 42, 48, 55, 61, 68, 72, 76, 80]
icp: nn_idx:  [82, 84, 90, 96, 95, 97, 56, 100, 102, 106, 113, 120, 130, 138, 5, 8, 12, 17, 23, 29, 31, 125, 124, 119, 114, 39, 38, 42, 49, 55, 62, 68, 72, 76, 80]
icp: nn_idx:  [82, 84, 90, 96, 95, 97, 56, 100, 102, 106, 113, 120, 130, 138, 5, 8, 12, 17, 23, 29, 31, 125, 124, 119, 114, 39, 38, 42, 49, 55, 62, 68, 72, 76, 80]
icp: converged in  5  iteration(s)
plt.figure(figsize=(12.8,20),dpi=100)
plt.subplots_adjust(wspace=2, hspace=1.2)
pltx = 5
plty = len(ctrs_all[3]) // pltx
if len(ctrs_all[3]) % pltx:
    plty += 1
# Dictionary containing number of correct answers and number of same labels
results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i in range(len(ctrs_all[3])):
    if det_numbers4[i] == labels4[i]:
        results[labels4[i]][0] += 1
    results[labels4[i]][1] += 1
    title = 'Number:%d\n (' %(det_numbers4[i])
    for s in similarities4[i]:
        title += '%.2f ' %(s)
    title += ')'
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+1), plt.imshow(cv2.cvtColor(subimgs[i],cv2.COLOR_BGR2RGB)), plt.title(title),plt.xticks([]),plt.yticks([])
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+6), plt.imshow(dbg_imgs[i], cmap='gray'),plt.xticks([]),plt.yticks([])
plt.show()
for k,v in results.items():
    print(k, ': ', v[0], ' / ', v[1])

f:id:nokixa:20220223004543p:plain

-1 :  14  /  22
0 :  2  /  2
1 :  10  /  11
2 :  4  /  4
3 :  0  /  0
5 :  1  /  2
subimgs = []
subctrs = []
det_numbers5 = []
similarities5 = []
dbg_imgs = []
for i in range(len(ctrs_all[4])):
    subimg, subctr = create_contour_area_image(resized_imgs[4], ctrs_all[4], i)
    debug_number = 0 if labels5[i] == -1 else labels5[i]
    det_number, sim, img = determine_number(subctr, binimgs5[0], binimgs5[1:5], subctrs5[1:5], subctrs5_selected_pts, debug_number=debug_number)
    subimgs += [subimg]
    subctrs += [subctr]
    det_numbers5 += [det_number]
    similarities5 += [sim]
    dbg_imgs += [img]
icp: nn_idx:  [44, 47, 50, 53, 56, 63, 67, 72, 78, 85, 89, 91, 3, 18, 28, 38, 37, 42]
icp: nn_idx:  [44, 47, 50, 53, 57, 63, 67, 72, 78, 84, 89, 91, 4, 18, 28, 38, 37, 42]
icp: nn_idx:  [44, 48, 50, 53, 57, 63, 67, 72, 78, 84, 89, 91, 4, 18, 31, 38, 37, 42]
icp: nn_idx:  [44, 48, 50, 53, 57, 63, 67, 72, 78, 84, 89, 92, 4, 18, 31, 38, 37, 42]
icp: nn_idx:  [44, 48, 50, 53, 57, 63, 67, 72, 78, 84, 89, 92, 4, 18, 31, 38, 37, 42]
icp: converged in  4  iteration(s)
icp: nn_idx:  [83, 88, 89, 90, 92, 3, 6, 24, 32, 35, 38, 37, 40, 43, 46, 47, 48, 49, 51, 53, 54, 56, 59, 62, 29, 28, 19, 10, 11, 13, 74, 79]

...

icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 22, 21, 20]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 20, 21, 22]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 22, 21, 20]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 20, 21, 22]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 22, 21, 20]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 20, 21, 22]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 22, 21, 20]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 20, 21, 22]
icp: nn_idx:  [24, 25, 27, 29, 31, 32, 35, 37, 40, 41, 44, 45, 43, 46, 0, 1, 2, 4, 6, 8, 10, 11, 12, 14, 18, 39, 33, 30, 23, 22, 21, 20]
icp: Not converged
icp: nn_idx:  [1, 2, 4, 5, 7, 9, 11, 13, 14, 16, 18, 19, 21, 22, 24, 25, 26, 28, 29, 31, 32, 34, 36, 35, 33, 30, 27, 23, 20, 17, 39, 40, 41, 12, 6, 3, 46, 44, 42, 43, 45, 0, 8]
icp: nn_idx:  [1, 3, 5, 6, 7, 8, 10, 12, 14, 16, 17, 20, 21, 22, 24, 25, 27, 28, 29, 31, 32, 34, 35, 36, 33, 30, 26, 23, 19, 18, 39, 40, 41, 13, 9, 4, 46, 44, 43, 42, 45, 0, 2]
icp: nn_idx:  [1, 3, 5, 6, 7, 8, 10, 12, 14, 16, 17, 20, 21, 22, 24, 25, 27, 28, 29, 31, 32, 34, 35, 36, 33, 30, 26, 23, 19, 18, 39, 40, 41, 13, 9, 4, 46, 44, 43, 42, 45, 0, 2]
icp: converged in  2  iteration(s)
icp: nn_idx:  [18, 20, 19, 14, 13, 17, 23, 27, 29, 31, 34, 39, 43, 46, 0, 44, 40, 37, 41, 2, 3, 5, 7, 10, 15]
icp: nn_idx:  [17, 20, 19, 14, 13, 18, 23, 27, 29, 31, 34, 38, 43, 45, 0, 44, 39, 40, 41, 2, 3, 5, 7, 10, 15]

...

icp: nn_idx:  [0, 2, 4, 6, 8, 17, 21, 24, 27, 31, 34, 37, 40, 44, 48, 50, 52, 54, 56, 58, 61, 65, 68, 69, 67, 66, 63, 64, 36, 70, 71, 74, 77, 78, 81, 82, 83, 80, 84, 86, 88, 91, 94]
icp: nn_idx:  [0, 2, 4, 6, 8, 17, 21, 24, 27, 31, 34, 37, 40, 44, 48, 50, 52, 54, 56, 58, 61, 65, 68, 69, 67, 66, 63, 64, 36, 70, 71, 74, 77, 78, 81, 82, 83, 80, 84, 86, 88, 91, 94]
icp: converged in  8  iteration(s)
icp: nn_idx:  [56, 59, 64, 65, 66, 67, 70, 75, 86, 90, 0, 6, 10, 14, 20, 21, 81, 77, 74, 29, 32, 38, 44, 49, 54]
icp: nn_idx:  [56, 59, 64, 65, 66, 67, 70, 75, 86, 90, 0, 6, 10, 14, 20, 21, 81, 77, 74, 29, 32, 38, 44, 50, 54]
icp: nn_idx:  [56, 59, 64, 65, 66, 67, 70, 75, 86, 90, 1, 6, 10, 14, 20, 21, 81, 77, 74, 29, 32, 38, 44, 50, 54]
icp: nn_idx:  [56, 59, 64, 65, 66, 67, 70, 75, 86, 90, 1, 6, 10, 14, 20, 21, 81, 77, 74, 29, 32, 38, 44, 50, 54]
icp: converged in  3  iteration(s)
plt.figure(figsize=(12.8,20),dpi=100)
plt.subplots_adjust(wspace=2, hspace=1.2)
pltx = 5
plty = len(ctrs_all[4]) // pltx
if len(ctrs_all[4]) % pltx:
    plty += 1
# Dictionary containing number of correct answers and number of same labels
results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i in range(len(ctrs_all[4])):
    if det_numbers5[i] == labels5[i]:
        results[labels5[i]][0] += 1
    results[labels5[i]][1] += 1
    title = 'Number:%d\n (' %(det_numbers5[i])
    for s in similarities5[i]:
        title += '%.2f ' %(s)
    title += ')'
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+1), plt.imshow(cv2.cvtColor(subimgs[i],cv2.COLOR_BGR2RGB)), plt.title(title),plt.xticks([]),plt.yticks([])
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+6), plt.imshow(dbg_imgs[i], cmap='gray'),plt.xticks([]),plt.yticks([])
plt.show()
for k,v in results.items():
    print(k, ': ', v[0], ' / ', v[1])

f:id:nokixa:20220223004548p:plain

-1 :  5  /  6
0 :  4  /  4
1 :  14  /  16
2 :  4  /  5
3 :  0  /  0
5 :  2  /  4
subimgs = []
subctrs = []
det_numbers6 = []
similarities6 = []
dbg_imgs = []
for i in range(len(ctrs_all[5])):
    subimg, subctr = create_contour_area_image(resized_imgs[5], ctrs_all[5], i)
    debug_number = 0 if labels6[i] == -1 else labels6[i]
    det_number, sim, img = determine_number(subctr, binimgs5[0], binimgs5[1:5], subctrs5[1:5], subctrs5_selected_pts, debug_number=debug_number)
    subimgs += [subimg]
    subctrs += [subctr]
    det_numbers6 += [det_number]
    similarities6 += [sim]
    dbg_imgs += [img]
icp: nn_idx:  [41, 45, 48, 51, 56, 65, 73, 82, 97, 103, 105, 109, 119, 85, 21, 32, 34, 38]
icp: nn_idx:  [41, 45, 48, 51, 56, 65, 73, 82, 97, 103, 105, 109, 119, 10, 21, 32, 34, 38]
icp: nn_idx:  [41, 45, 48, 51, 56, 65, 73, 82, 97, 103, 105, 109, 0, 10, 21, 32, 34, 38]
icp: nn_idx:  [41, 45, 48, 51, 56, 65, 73, 82, 97, 103, 105, 109, 0, 10, 21, 32, 34, 38]
icp: converged in  3  iteration(s)
icp: nn_idx:  [40, 46, 47, 48, 52, 56, 63, 70, 76, 88, 92, 94, 91, 99, 103, 104, 105, 108, 109, 110, 111, 0, 2, 85, 16, 72, 68, 59, 34, 30, 33, 38]
icp: nn_idx:  [41, 46, 47, 48, 53, 57, 63, 71, 76, 88, 92, 91, 95, 99, 102, 103, 105, 108, 109, 110, 111, 0, 3, 85, 16, 72, 68, 59, 34, 32, 33, 38]
icp: nn_idx:  [41, 46, 47, 48, 53, 57, 64, 71, 76, 88, 92, 91, 95, 99, 102, 103, 105, 108, 109, 110, 111, 0, 3, 85, 16, 72, 68, 59, 34, 32, 33, 38]
icp: nn_idx:  [41, 46, 47, 48, 53, 57, 64, 71, 76, 88, 92, 91, 95, 99, 102, 103, 105, 108, 109, 110, 111, 0, 3, 85, 16, 72, 68, 59, 34, 32, 33, 38]
icp: converged in  3  iteration(s)
icp: nn_idx:  [12, 21, 30, 34, 39, 42, 45, 46, 37, 48, 49, 50, 52, 55, 61, 66, 73, 81, 90, 98, 101, 103, 105, 115, 92, 83, 76, 72, 68, 27, 78, 16, 15, 26, 25, 24, 14, 85, 3, 112, 111, 0, 8]

...

icp: nn_idx:  [74, 75, 77, 79, 0, 9, 12, 15, 17, 20, 23, 25, 28, 31, 33, 35, 36, 38, 40, 43, 45, 48, 51, 50, 52, 49, 47, 27, 26, 53, 54, 56, 59, 60, 62, 63, 64, 61, 65, 66, 67, 70, 72]
icp: nn_idx:  [73, 75, 77, 79, 0, 10, 12, 15, 17, 20, 23, 25, 28, 31, 33, 35, 36, 38, 40, 43, 45, 48, 51, 50, 52, 49, 47, 27, 26, 53, 54, 56, 59, 60, 62, 63, 64, 61, 65, 66, 67, 69, 72]
icp: nn_idx:  [73, 75, 77, 78, 0, 10, 12, 15, 17, 20, 23, 25, 28, 31, 33, 34, 36, 38, 40, 43, 45, 48, 51, 50, 52, 49, 47, 27, 26, 53, 54, 56, 59, 60, 62, 63, 64, 61, 65, 66, 67, 69, 72]
icp: nn_idx:  [73, 75, 77, 78, 0, 10, 12, 15, 17, 20, 23, 25, 28, 31, 33, 34, 36, 38, 40, 43, 45, 48, 51, 50, 52, 49, 47, 27, 26, 53, 54, 56, 59, 60, 62, 63, 64, 61, 65, 66, 67, 69, 72]
icp: converged in  7  iteration(s)
icp: nn_idx:  [41, 44, 48, 49, 51, 50, 54, 57, 67, 71, 75, 79, 1, 6, 11, 12, 62, 59, 17, 19, 21, 27, 31, 34, 38]
icp: nn_idx:  [41, 44, 48, 49, 51, 50, 53, 57, 67, 71, 75, 79, 2, 6, 11, 12, 62, 59, 58, 17, 21, 26, 31, 35, 39]
icp: nn_idx:  [42, 44, 48, 49, 51, 50, 53, 57, 67, 70, 75, 79, 1, 6, 11, 12, 62, 59, 58, 17, 21, 26, 31, 36, 40]
icp: nn_idx:  [42, 44, 48, 49, 51, 50, 53, 57, 67, 70, 75, 79, 1, 6, 11, 12, 62, 59, 58, 17, 21, 26, 31, 36, 40]
icp: converged in  3  iteration(s)
plt.figure(figsize=(12.8,20),dpi=100)
plt.subplots_adjust(wspace=2, hspace=1.2)
pltx = 5
plty = len(ctrs_all[5]) // pltx
if len(ctrs_all[5]) % pltx:
    plty += 1
# Dictionary containing number of correct answers and number of same labels
results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i in range(len(ctrs_all[5])):
    if det_numbers6[i] == labels6[i]:
        results[labels6[i]][0] += 1
    results[labels6[i]][1] += 1
    title = 'Number:%d\n (' %(det_numbers6[i])
    for s in similarities6[i]:
        title += '%.2f ' %(s)
    title += ')'
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+1), plt.imshow(cv2.cvtColor(subimgs[i],cv2.COLOR_BGR2RGB)), plt.title(title),plt.xticks([]),plt.yticks([])
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+6), plt.imshow(dbg_imgs[i], cmap='gray'),plt.xticks([]),plt.yticks([])
plt.show()
for k,v in results.items():
    print(k, ': ', v[0], ' / ', v[1])

f:id:nokixa:20220223004553p:plain

-1 :  4  /  4
0 :  4  /  4
1 :  7  /  16
2 :  4  /  5
3 :  0  /  0
5 :  0  /  4
subimgs = []
subctrs = []
det_numbers7 = []
similarities7 = []
dbg_imgs = []
for i in range(len(ctrs_all[6])):
    subimg, subctr = create_contour_area_image(resized_imgs[6], ctrs_all[6], i)
    debug_number = 0 if labels7[i] == -1 else labels7[i]
    det_number, sim, img = determine_number(subctr, binimgs5[0], binimgs5[1:5], subctrs5[1:5], subctrs5_selected_pts, debug_number=debug_number)
    subimgs += [subimg]
    subctrs += [subctr]
    det_numbers7 += [det_number]
    similarities7 += [sim]
    dbg_imgs += [img]
icp: nn_idx:  [141, 154, 176, 1, 9, 15, 20, 26, 33, 40, 63, 82, 84, 79, 157, 158, 130, 133]
icp: nn_idx:  [140, 153, 175, 1, 8, 14, 20, 27, 33, 41, 62, 82, 84, 79, 157, 158, 145, 135]
icp: nn_idx:  [141, 154, 176, 1, 8, 14, 21, 27, 34, 41, 62, 81, 83, 79, 157, 158, 146, 137]
icp: nn_idx:  [142, 154, 176, 1, 8, 15, 21, 28, 34, 41, 62, 81, 83, 79, 158, 159, 146, 137]
icp: nn_idx:  [142, 155, 176, 1, 8, 15, 21, 28, 35, 41, 62, 81, 83, 79, 158, 159, 146, 137]
icp: nn_idx:  [143, 155, 176, 1, 8, 15, 21, 28, 35, 41, 62, 81, 83, 79, 158, 159, 146, 138]
icp: nn_idx:  [143, 155, 177, 1, 8, 15, 21, 28, 35, 41, 62, 81, 79, 78, 158, 159, 146, 138]
icp: nn_idx:  [143, 155, 177, 1, 8, 15, 21, 28, 35, 41, 61, 80, 79, 78, 159, 160, 146, 138]
icp: nn_idx:  [144, 155, 177, 1, 8, 15, 21, 28, 35, 41, 61, 79, 78, 77, 159, 160, 146, 139]
icp: nn_idx:  [144, 156, 177, 1, 9, 15, 21, 28, 35, 41, 61, 79, 78, 77, 160, 159, 146, 139]
icp: nn_idx:  [144, 156, 177, 1, 9, 15, 21, 28, 35, 41, 61, 79, 78, 77, 160, 159, 146, 140]
icp: nn_idx:  [144, 156, 177, 1, 9, 15, 21, 28, 35, 41, 61, 78, 79, 77, 160, 161, 146, 140]
icp: nn_idx:  [145, 156, 177, 1, 9, 15, 21, 28, 35, 41, 61, 78, 79, 77, 160, 161, 146, 140]
icp: nn_idx:  [145, 156, 177, 1, 9, 15, 21, 28, 35, 41, 61, 78, 79, 77, 160, 161, 146, 140]
icp: converged in  13  iteration(s)
icp: nn_idx:  [146, 157, 167, 177, 189, 0, 13, 18, 25, 53, 60, 51, 30, 34, 41, 52, 62, 72, 83, 93, 104, 99, 91, 79, 74, 176, 181, 175, 163, 156, 128, 132]


...

icp: nn_idx:  [1, 3, 5, 7, 9, 18, 22, 25, 29, 32, 36, 39, 43, 46, 50, 52, 54, 56, 58, 62, 66, 69, 73, 72, 71, 70, 67, 68, 38, 74, 75, 78, 82, 83, 86, 87, 88, 85, 89, 91, 93, 96, 0]
icp: nn_idx:  [1, 3, 5, 7, 9, 18, 22, 25, 29, 32, 36, 39, 43, 46, 50, 52, 54, 56, 58, 62, 66, 69, 73, 72, 71, 70, 67, 68, 38, 74, 75, 78, 82, 83, 86, 87, 88, 85, 89, 91, 93, 96, 0]
icp: converged in  8  iteration(s)
icp: nn_idx:  [59, 63, 68, 69, 70, 71, 74, 79, 91, 95, 2, 6, 10, 14, 21, 22, 86, 82, 78, 30, 33, 40, 46, 52, 56]
icp: nn_idx:  [59, 62, 68, 69, 70, 71, 75, 79, 91, 95, 2, 6, 10, 15, 21, 22, 86, 82, 78, 30, 34, 40, 46, 53, 57]
icp: nn_idx:  [59, 63, 68, 67, 70, 71, 75, 79, 91, 95, 2, 7, 10, 15, 21, 22, 86, 82, 78, 31, 34, 40, 46, 53, 57]
icp: nn_idx:  [59, 63, 68, 67, 70, 71, 75, 79, 91, 95, 2, 7, 10, 15, 22, 21, 86, 82, 78, 31, 34, 41, 47, 53, 57]
icp: nn_idx:  [59, 63, 68, 67, 70, 71, 75, 80, 91, 96, 2, 7, 10, 15, 22, 21, 86, 82, 78, 31, 34, 41, 47, 53, 57]
icp: nn_idx:  [59, 63, 68, 67, 70, 71, 75, 80, 91, 97, 2, 7, 10, 15, 22, 21, 86, 82, 78, 31, 34, 41, 47, 53, 57]
icp: nn_idx:  [59, 63, 68, 67, 70, 71, 75, 80, 91, 97, 2, 7, 10, 15, 22, 21, 86, 82, 78, 31, 34, 41, 47, 53, 57]
icp: converged in  6  iteration(s)
plt.figure(figsize=(12.8,20),dpi=100)
plt.subplots_adjust(wspace=2, hspace=1.2)
pltx = 5
plty = len(ctrs_all[6]) // pltx
if len(ctrs_all[6]) % pltx:
    plty += 1
# Dictionary containing number of correct answers and number of same labels
results = {-1:[0,0], 0:[0,0], 1:[0,0], 2:[0,0], 3:[0,0], 5:[0,0]}
for i in range(len(ctrs_all[6])):
    if det_numbers7[i] == labels7[i]:
        results[labels7[i]][0] += 1
    results[labels7[i]][1] += 1
    title = 'Number:%d\n (' %(det_numbers7[i])
    for s in similarities7[i]:
        title += '%.2f ' %(s)
    title += ')'
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+1), plt.imshow(cv2.cvtColor(subimgs[i],cv2.COLOR_BGR2RGB)), plt.title(title),plt.xticks([]),plt.yticks([])
    plt.subplot(plty*2, pltx, (i//5)*10+i%5+6), plt.imshow(dbg_imgs[i], cmap='gray'),plt.xticks([]),plt.yticks([])
plt.show()
for k,v in results.items():
    print(k, ': ', v[0], ' / ', v[1])

f:id:nokixa:20220223004558p:plain

-1 :  8  /  9
0 :  0  /  0
1 :  6  /  6
2 :  4  /  9
3 :  0  /  0
5 :  0  /  0

Having roughly tried all the images: some characters are not detected well, but I have a feeling that adjusting the thresholds and so on can get us there.

Also, ICP sometimes failed to converge.
Looking at the logs, the nearest-neighbor point set keeps alternating between two patterns.
An excerpt is shown below, followed by a sketch of a possible fix.
I will also revisit the convergence criterion later.

icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 9, 4, 3, 1]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 4, 3, 1, 9]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 9, 4, 3, 1]
icp: nn_idx:  [5, 8, 11, 17, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 37, 42, 41, 0, 40, 39, 2, 6, 7, 4, 3, 1, 9]
icp: Not converged
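
Since the correspondences cycle with period 2, one idea (my own assumption, not yet in the code) is to also treat a match with an assignment from a few iterations back as converged, because from that point on the updates can only repeat:

# Sketch: candidate convergence test for icp(). `history` would hold the
# nn_idx lists from previous iterations; matching any of the last `cycle`
# entries means the iteration has entered a loop and can stop.
def correspondence_settled(history, nn_idx_tmp, cycle=2):
    return any(nn_idx_tmp == prev for prev in history[-cycle:])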

Checking the distribution of similarity scores

Let's plot the similarity scores to each digit seen above as histograms.
The data is split into two groups, matching and non-matching samples.
Also, since the score-character font differs by year and the situation may differ with it, each year is shown separately.

labels = labels1 + labels2
sims = similarities1 + similarities2
numbers = [0,1,2,3,5]
plt.figure(figsize=(20, 4.8), dpi=100)
plt.suptitle('Year 2019')
for i,n in enumerate(numbers):
    t = [s[i] for j,s in enumerate(sims) if labels[j]==n]
    f = [s[i] for j,s in enumerate(sims) if labels[j]!=n]
    plt.subplot(1,5,1+i), plt.hist([t,f], 20, [0.5,1.0], stacked=False, color=['orange', 'green'])
    plt.title('Number: %d' %(n))
plt.show()

f:id:nokixa:20220223004602p:plain

labels = labels3 + labels4
sims = similarities3 + similarities4
plt.figure(figsize=(20, 4.8), dpi=100)
plt.suptitle('Year 2020')
for i,n in enumerate(numbers):
    t = [s[i] for j,s in enumerate(sims) if labels[j]==n]
    f = [s[i] for j,s in enumerate(sims) if labels[j]!=n]
    plt.subplot(1,5,1+i), plt.hist([t,f], 20, [0.5,1.0], stacked=False, color=['orange', 'green'])
    plt.title('Number: %d' %(n))
plt.show()

f:id:nokixa:20220223004605p:plain

labels = labels5 + labels6 + labels7
sims = similarities5 + similarities6 + similarities7
plt.figure(figsize=(20, 4.8), dpi=100)
plt.suptitle('Year 2021')
for i,n in enumerate(numbers):
    t = [s[i] for j,s in enumerate(sims) if labels[j]==n]
    f = [s[i] for j,s in enumerate(sims) if labels[j]!=n]
    plt.subplot(1,5,1+i), plt.hist([t,f], 20, [0.5,1.0], stacked=False, color=['orange', 'green'])
    plt.title('Number: %d' %(n))
plt.show()

f:id:nokixa:20220223004608p:plain

For some digits the similarity scores of the matching and non-matching samples separate cleanly, but for others they do not.
In those cases no reliable threshold can be set... "1" and "5" are especially difficult.
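
One way to quantify the separation (a sketch of mine, reusing the t/f lists computed in the histogram loops above, assumed non-empty): place the threshold halfway between the worst matching score and the best non-matching score, and report failure when the two overlap.

# t: similarities of matching samples, f: similarities of non-matching samples
def pick_threshold(t, f):
    lo, hi = max(f), min(t)  # best impostor score vs. worst genuine score
    if lo >= hi:
        return None          # distributions overlap; no clean threshold exists
    return (lo + hi) / 2.0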

Stopping here for now

This has gotten long, so I will break here.
As far as the Jupyter notebook goes it continues, so next time I will start from the state carried over from this post.

OpenCVやってみる - 34. Comparing the character "0"

Back-to-back posts, but this is the continuation.
春のパン祭り apparently starts on February 1st, so I am picking up the pace...

This post uses the same Jupyter notebook as the previous one, so preparations such as image loading are omitted.

Judging "0" by ellipse fitting

This is about judging the character "0".
Restating the method considered last time:

  • Fit an ellipse, run template matching against the fitted ellipse, and judge the contour to be a "0" if the similarity is at or above a threshold

For now, let's try it on the same image that was used for template selection.
The procedure is:

  • Fit an ellipse to each contour
  • Prepare a small image around each contour
  • Prepare a filled image of the contour and a filled image of the fitted ellipse
  • Apply template matching to these and check the similarity

That's the whole flow.

def check_degree_of_ellipse(ctr):
    # Fit ellipse
    ellipse = cv2.fitEllipse(ctr)
    # The area to compare: straight bounding rectangle of the ellipse
    bound = cv2.boundingRect(ctr)
    # Create solid contour image
    ## Prepare image data array
    solid_contour = np.zeros((bound[3],bound[2]), 'uint8')
    ## Move origin of contour points to the corner of the bounding rectangle
    ctr = ctr - bound[0:2]
    solid_contour = cv2.drawContours(solid_contour, [ctr], -1, 255,-1)
    # Create solid ellipse image
    ## Move position of the ellipse to the corner of the bounding rectangle
    ellipse2 = ((ellipse[0][0] - bound[0], ellipse[0][1] - bound[1]), ellipse[1], ellipse[2])
    solid_ellipse = np.zeros((bound[3],bound[2]), 'uint8')
    solid_ellipse = cv2.ellipse(solid_ellipse, ellipse2, 255, -1)
    degree = cv2.matchTemplate(solid_contour.copy(), solid_ellipse, cv2.TM_CCORR_NORMED)
    return degree, solid_contour, solid_ellipse
for i, ctr in enumerate(ctrs1[0:20]):
    deg, solid_contour, solid_ellipse = check_degree_of_ellipse(ctr)
    print("No. ", i, ": ", deg)
    plt.figure(figsize=(3.2,2.4), dpi=100)
    plt.subplot(121), plt.imshow(solid_contour, cmap='gray'), plt.title('Original'), plt.xticks([]), plt.yticks([])
    plt.subplot(122), plt.imshow(solid_ellipse, cmap='gray'), plt.title('Fitted ellipse'), plt.xticks([]), plt.yticks([])
    plt.show()

No. 0 : [[0.96151537]]

f:id:nokixa:20220124025629p:plain

No. 1 : [[0.9259069]]

f:id:nokixa:20220124025631p:plain

No. 2 : [[0.9137973]]

f:id:nokixa:20220124025633p:plain

No. 3 : [[0.90786487]]

f:id:nokixa:20220124025635p:plain

No. 4 : [[0.9035319]]

f:id:nokixa:20220124025637p:plain

No. 5 : [[0.95029587]]

f:id:nokixa:20220124025548p:plain

No. 6 : [[0.8379881]]

f:id:nokixa:20220124025550p:plain

No. 7 : [[0.9888445]]

f:id:nokixa:20220124025552p:plain

No. 8 : [[0.8417428]]

f:id:nokixa:20220124025554p:plain

No. 9 : [[0.8790744]]

f:id:nokixa:20220124025557p:plain

No. 10 : [[0.8300663]]

f:id:nokixa:20220124025559p:plain

No. 11 : [[0.99219877]]

f:id:nokixa:20220124025602p:plain

No. 12 : [[0.78739446]]

f:id:nokixa:20220124025604p:plain

No. 13 : [[0.879098]]

f:id:nokixa:20220124025606p:plain

No. 14 : [[0.7555138]]

f:id:nokixa:20220124025608p:plain

No. 15 : [[0.8576322]]

f:id:nokixa:20220124025611p:plain

No. 16 : [[0.8737255]]

f:id:nokixa:20220124025613p:plain

No. 17 : [[0.86299753]]

f:id:nokixa:20220124025615p:plain

No. 18 : [[0.8445317]]

f:id:nokixa:20220124025618p:plain

No. 19 : [[0.8447284]]

f:id:nokixa:20220124025620p:plain

This worked out mostly as expected.
Looking at the "0" characters, the similarity with the fitted ellipse is quite high.

A small improvement

Since the character font differs from year to year, this result might not be stable across years either.
So the idea I came up with is:

  • Still fit an ellipse, but then use the ellipse parameters to normalize the contour (width/height and angle) so that it appears the same way as the "0" template, and compare it with the "0" template

First, take the "0" template itself, fit an ellipse, and rotate it so that it stands upright.
When rotating, the image size has to be made somewhat larger to leave room.

ellipse_2019_zero = cv2.fitEllipse(ctrs1_numbers[0])
print(ellipse_2019_zero)
((645.8063354492188, 285.6666564941406), (38.72261047363281, 54.549617767333984), 167.13916015625)
bound_2019_zero = cv2.boundingRect(ctrs1_numbers[0])
ellipse_2019_zero_w = math.ceil(ellipse_2019_zero[1][0])
ellipse_2019_zero_h = math.ceil(ellipse_2019_zero[1][1])
origin_2019_zero_x = bound_2019_zero[0] - (int((ellipse_2019_zero_w - bound_2019_zero[2])/2.0))
origin_2019_zero_y = bound_2019_zero[1] - (int((ellipse_2019_zero_h - bound_2019_zero[3])/2.0))
print(bound_2019_zero)
print(origin_2019_zero_x, origin_2019_zero_y)
(626, 259, 41, 54)
627 259
subimg_2019_zero = np.zeros((bound_2019_zero[3], bound_2019_zero[2]), 'uint8')
ctr = ctrs1_numbers[0] - bound_2019_zero[0:2]
subimg_2019_zero = cv2.drawContours(subimg_2019_zero, [ctr], -1, 255,-1)
Mrot = cv2.getRotationMatrix2D((bound_2019_zero[2]/2.0, bound_2019_zero[3]/2.0), ellipse_2019_zero[2], 1)
Mrot[0,2] += (int((ellipse_2019_zero_w - bound_2019_zero[2])/2.0))
Mrot[1,2] += (int((ellipse_2019_zero_h - bound_2019_zero[3])/2.0))
subimg_2019_zero = cv2.warpAffine(subimg_2019_zero, Mrot, dsize=(ellipse_2019_zero_w, ellipse_2019_zero_h), flags=cv2.INTER_NEAREST)
plt.imshow(subimg_2019_zero, cmap='gray'), plt.title('Template(zero)'), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220124025623p:plain

Now let's compare each contour against this template image.

def compare_to_template_zero(ctr):
    ellipse = cv2.fitEllipse(ctr)
    bound = cv2.boundingRect(ctr)
    ellipse_w = math.ceil(ellipse[1][0])
    ellipse_h = math.ceil(ellipse[1][1])
    origin_x = bound[0] - (int((ellipse_w - bound[2])/2.0))
    origin_y = bound[1] - (int((ellipse_h - bound[3])/2.0))
    subimg = np.zeros((bound[3], bound[2]), 'uint8')
    ctr = ctr - bound[0:2]
    subimg = cv2.drawContours(subimg, [ctr], -1, 255,-1)
    Mrot = cv2.getRotationMatrix2D((bound[2]/2.0, bound[3]/2.0), ellipse[2], 1)
    Mrot[0,2] += (int((ellipse_w - bound[2])/2.0))
    Mrot[1,2] += (int((ellipse_h - bound[3])/2.0))
    subimg = cv2.warpAffine(subimg, Mrot, dsize=(ellipse_w, ellipse_h), flags=cv2.INTER_NEAREST)
    subimg = cv2.resize(subimg, dsize=(ellipse_2019_zero_w, ellipse_2019_zero_h), interpolation=cv2.INTER_NEAREST)
    degree = cv2.matchTemplate(subimg.copy(), subimg_2019_zero, cv2.TM_CCORR_NORMED)
    return degree, subimg
plt.figure(figsize=(20,15), dpi=100)
for i, ctr in enumerate(ctrs1[0:20]):
    deg, subimg = compare_to_template_zero(ctr)
    title = 'No. %d : %lf' %(i,deg[0,0])
    plt.subplot(4,5,i+1), plt.imshow(subimg, cmap='gray'), plt.title(title), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220124025625p:plain

This also gives a good result.
The "0" characters score around 0.96, and nothing else exceeds 0.9.
Setting the threshold to around 0.92 should make the judgment work. I think this will do as the detection method for "0".
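
The decision rule then reduces to a single comparison; a minimal sketch (0.92 is the tentative value above, not a final one):

def is_zero(ctr, th=0.92):
    deg, _ = compare_to_template_zero(ctr)
    return deg[0,0] >= th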

Wrapping up

That's it for this post.
So far this has only been run on a single 2019 image, so next time I want to evaluate it on the other images as well.

References

The sites I referred to are listed below.

OpenCVやってみる - 33. Affine matrix estimation and comparing characters other than "0"

Continuing from the previous post.
This time I want to actually estimate the affine transformation matrices between the character templates and the contours to be compared, and run the comparison.

Preparation

The usual image loading and preprocessing.

import cv2
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import math

img1 = cv2.imread('harupan_190428_1.jpg')
img2 = cv2.imread('harupan_190428_2.jpg')
img3 = cv2.imread('harupan_200317_1.jpg')
img4 = cv2.imread('harupan_210227_2.jpg')
img5 = cv2.imread('harupan_210402_1.jpg')
img6 = cv2.imread('harupan_210402_2.jpg')
img7 = cv2.imread('harupan_210414_1.jpg')

f:id:nokixa:20211121023052p:plain

Estimating the affine transformation parameters

To estimate an affine transformation, at least 3 pairs of matching points between the two images must be supplied.
Feature detection such as SIFT, which I used before, might work as well, but since contour data has already been obtained this time, couldn't the coordinates contained in it be used?
I will try it with the score-character contours from the previous post. (A minimal reminder of the 3-pair case is sketched below.)
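
With exactly 3 pairs, OpenCV returns the unique affine matrix directly; a self-contained sketch with made-up coordinates:

import cv2
import numpy as np

# Three matching point pairs (coordinates made up for illustration)
src = np.float32([[0, 0], [40, 0], [0, 54]])
dst = np.float32([[3, 5], [42, 8], [1, 60]])
M = cv2.getAffineTransform(src, dst)  # 2x3 matrix mapping src -> dst
print(M)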

Checking the contour data

As seen briefly in the previous post, contour data is a list of coordinates of points on the contour.
Let's confirm this with actual score-character contours.

# image: Input image, BGR format
def calculate_harupan(image, debug):
    h, w, chs = image.shape
    if h > 800 or w > 800:
        k = 800.0/h if w > h else 800.0/w
    else:
        k = 1.0
    img = cv2.resize(image, None, fx=k, fy=k, interpolation=cv2.INTER_AREA)
    if debug:
        print('Resized to ', img.shape)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Convert hue value (rotation, mask by saturation)
    hsv[:,:,0] = np.where(hsv[:,:,0] < 50, hsv[:,:,0]+180, hsv[:,:,0])
    hsv[:,:,0] = np.where(hsv[:,:,1] < 100, 0, hsv[:,:,0])
    # Thresholding with cv2.inRange()
    th_hue = cv2.inRange(hsv[:,:,0], 135, 190)
    contours, hierarchy = cv2.findContours(th_hue, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    indices0 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] == -1]
    indices1 = [i for i,hier in enumerate(hierarchy[0,:,:]) if hier[3] in indices0]
    if debug:
        print('Number of contours: ', len(contours))
        print('Number of indices0: ', len(indices0), 'indices1: ', len(indices1))
    contours1 = [contours[i] for i in indices1]
    contours1_filtered = [ctr for ctr in contours1 if cv2.contourArea(ctr) > 800*800/4000]
    if debug:
        return contours1_filtered, img
    else:
        return contours1_filtered
ctrs1, img1_resize = calculate_harupan(img1, True)

idx_zero = 26; ctrs1_zero = ctrs1[idx_zero]
idx_one = 27; ctrs1_one = ctrs1[idx_one]
idx_two = 24; ctrs1_two = ctrs1[idx_two]
idx_three = 33; ctrs1_three = ctrs1[idx_three]
idx_five = 35; ctrs1_five = ctrs1[idx_five]
ctrs1_numbers = [ctrs1_zero, ctrs1_one, ctrs1_two, ctrs1_three, ctrs1_five]
[print(ctr.shape[0]) for ctr in ctrs1_numbers];
Resized to  (1067, 800, 3)
Number of contours:  2514
Number of indices0:  1448 indices1:  875
62
38
99
94
67

Each of them has fewer than 100 points.
Let's show these coordinates on the image; here I tried cv2.drawMarker().

def create_contour_area_image(img, ctr):
    x,y,w,h = cv2.boundingRect(ctr)
    rtn_img = img[y:y+h,x:x+w,:].copy()
    rtn_ctr = ctr.copy()
    origin = np.array([x,y])
    for c in rtn_ctr:
        c[0,:] -= origin
    return rtn_img, rtn_ctr

plt.figure(figsize=(6.4,4.8), dpi=100)
for i,ctr in enumerate(ctrs1_numbers):
    subimg, subctr = create_contour_area_image(img1_resize, ctr)
    [cv2.drawMarker(subimg, p, (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3) for p in subctr[:,0,:]];
    plt.subplot(1,5,1+i), plt.imshow(cv2.cvtColor(subimg, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220124001042p:plain

Many contour points appear at the corners and curved parts of the characters, and on straight segments points also show up when the segment is slanted.

Parameter estimation method

To estimate the affine transformation parameters, the points on the template image must be matched with the points on the comparison image. After wondering how to do this, the ICP method described in the Cambridge textbook I looked at before seemed usable.

http://www.computervisionmodels.com/
https://www.amazon.co.jp/Computer-Vision-Models-Learning-Inference/dp/1107011795

ICP (iterative closest point)

It is introduced in Chapter 17 "Models for shape" of the textbook above.
In particular, section "17.3 Shape templates" treats exactly this theme: there is a template of the shape to detect, it appears in the target image under some transformation, and the task is to find that transformation. Which is precisely what I am trying to do here.

The steps of the ICP algorithm are:

  1. Set an initial value of the transformation parameters \psi
  2. Transform the template (landmark) points w_n (n=1...N) by \psi to obtain w'_n
  3. Match each w'_n to the nearest point y_n on the target image
  4. Compute the transformation parameters \psi from the w_n - y_n correspondences
  5. Return to step 2 and repeat until convergence (until the correspondences stop changing?)

That's the whole loop; a compact sketch follows below.
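
Written as code, the loop looks roughly like this. It is only a skeleton under my own naming (icp_skeleton, estimate); note that it does not handle duplicate nearest neighbors, and the actual implementation used in this series appears further below.

import numpy as np

# w: template points (N,2), y: target points (K,2), M: initial 2x3 matrix
# estimate: any function that fits a 2x3 affine matrix to correspondences
def icp_skeleton(w, y, M, estimate, max_iter=100):
    prev = None
    for _ in range(max_iter):
        w2 = w @ M[:, 0:2].T + M[:, 2]                 # step 2: transform w_n
        d = np.linalg.norm(w2[:, None] - y[None], axis=2)
        nn = d.argmin(axis=1)                          # step 3: nearest y_n
        if prev is not None and np.array_equal(nn, prev):
            break                                      # step 5: converged
        M = estimate(w, y[nn])                         # step 4: re-estimate psi
        prev = nn
    return M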

OpenCV has an ICP class inside ppf_match_3d, but it appears to be meant for 3D point clouds, so this time I will implement my own.

https://docs.opencv.org/3.4/dc/d9b/classcv_1_1ppf__match__3d_1_1ICP.html

"0"の文字の検出について

ICPアルゴリズムを使っていきたいと思いますが、"0"の文字だけちょっと問題が。
他の数字では角がありますが、"0"だけ楕円形状をしていて、輪郭点が必ずしも同じ位置に現れない可能性があります。また、楕円だとアフィン変換をしても結局楕円になる、ということもあるので、以下のような手法を考えてみます。

  • 楕円で近似、近似した楕円とテンプレートマッチングを実施、一致度が閾値以上であれば"0"の文字であると判定

OpenCV functions for estimating affine transformation parameters

For estimating affine transformation parameters, OpenCV has the following two functions:

  • cv2.getAffineTransform() : computes the matrix from exactly 3 point pairs
  • cv2.estimateAffine2D() : estimates an optimal matrix from 3 or more pairs, using robust methods such as RANSAC

While searching I also came across estimateRigidTransform(), but it is already deprecated and is gone in the version I am using (4.5.3).
http://opencv.jp/opencv-2svn/cpp/structural_analysis_and_shape_descriptors.html#cv-estimaterigidtransform
https://campkougaku.com/2020/07/16/estimateaffine2d/

The first is pointless here (give it 3 pairs and you simply get back the matrix that maps exactly those 3 pairs), so the second would be the one to use; but since the correspondence matching is prepared by myself this time, having RANSAC and the like run underneath seems superfluous.

If the goal is simply the best transformation matrix from more than 3 correspondences, then under a least-squared-error criterion there is a closed-form solution. This is also covered in the textbook mentioned above.

Computing the least-squares affine parameters from 4 or more correspondences

Following the textbook, write the affine transformation matrix as


\begin{bmatrix} \boldsymbol{\Phi} & \boldsymbol{\tau} \end{bmatrix} = 
\begin{bmatrix} \phi_{11} & \phi_{12} & \tau_x \\ \phi_{21} & \phi_{22} & \tau_y \end{bmatrix}

Denote the source points of the correspondences by \boldsymbol{w}_i = [u_i \; v_i]^T \ (i = 1 \dots N) and the destination points by \boldsymbol{x}_i = [x_i \; y_i]^T, and define the matrix \boldsymbol{A}_i and the vector \boldsymbol{b} as


\boldsymbol{A}_i = 
\begin{bmatrix} u_i & v_i & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_i & v_i & 1 \end{bmatrix} \\
\boldsymbol{b} = \begin{bmatrix} \phi_{11} & \phi_{12} & \tau_x & \phi_{21} & \phi_{22} & \tau_y \end{bmatrix} ^T

\boldsymbol{b} is the vector whose elements are the transformation parameters, and the task is to find its optimum \boldsymbol{\hat{b}}:


\boldsymbol{\hat{b}} = 
    \underset{\boldsymbol{b}}{\mathrm{argmin}} \left[ \sum_{i=1}^N (\boldsymbol{x}_i - \boldsymbol{A}_i\boldsymbol{b})^T (\boldsymbol{x}_i - \boldsymbol{A}_i\boldsymbol{b}) \right]

Stacking everything into a matrix \boldsymbol{A} and a vector \boldsymbol{x},


\boldsymbol{A} = 
\begin{bmatrix}
u_1 & v_1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_1 & v_1 & 1 \\
u_2 & v_2 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_2 & v_2 & 1 \\
\vdots & & & & & \\
u_N & v_N & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & u_N & v_N & 1 \\
\end{bmatrix} \\
    \boldsymbol{x} = \begin{bmatrix} x_1 & y_1 & x_2 & y_2 & \cdots & x_N & y_N \end{bmatrix}^T

the optimum \boldsymbol{\hat{b}} from before becomes


\boldsymbol{\hat{b}} = (\boldsymbol{A}^T \boldsymbol{A})^{-1}\boldsymbol{A}^T \boldsymbol{x}

The same derivation is also described on this site:
https://tukurutanoshi.hateblo.jp/entry/2019/02/27/165340
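
As a quick sanity check of this closed form, the same least-squares solution can be obtained with NumPy's lstsq; a minimal sketch with made-up correspondences:

import numpy as np

# Made-up correspondences: 4 source points w_i and their destinations x_i
w = np.array([[0., 0.], [10., 0.], [0., 20.], [10., 20.]])
x = np.array([[1., 2.], [12., 3.], [0., 24.], [11., 25.]])

# Build the stacked matrix A and vector x from the derivation above
N = w.shape[0]
A = np.zeros((2*N, 6))
A[0::2, 0:2] = w; A[0::2, 2] = 1  # rows for the x coordinates
A[1::2, 3:5] = w; A[1::2, 5] = 1  # rows for the y coordinates
b_hat = np.linalg.lstsq(A, x.flatten(), rcond=None)[0]
print(b_hat.reshape(2, 3))         # the estimated 2x3 affine matrix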

Below, I will work through the details.

Estimating transformation parameters with ICP for digits other than "0"

Choosing the landmark points

First, from the contour points of each character template, pick corner points that can serve as landmark points.
(On straight or gently curving parts of the contour, the contour points do not necessarily appear at the same spots.)
The trackbar from the previous post makes them easy to find, so I use it.
Also, for later use, I keep the image cropped to the contour's surroundings together with the contour data whose origin is moved to match it.

from ipywidgets import interact
subimgs1_numbers = []
subctrs1_numbers = []
for ctr in ctrs1_numbers:
    subimg, subctr = create_contour_area_image(img1_resize, ctr)
    subimgs1_numbers += [subimg]
    subctrs1_numbers += [subctr]
def plot_contour_point(img, ctr, i_point):
    img_copy = img.copy()
    cv2.drawMarker(img_copy, ctr[i_point,0,:], (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3);
    plt.imshow(cv2.cvtColor(img_copy, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.show()

def plot_contour_point_one(i_point):
    plot_contour_point(subimgs1_numbers[1], subctrs1_numbers[1], i_point)

def plot_contour_point_two(i_point):
    plot_contour_point(subimgs1_numbers[2], subctrs1_numbers[2], i_point)
    
def plot_contour_point_three(i_point):
    plot_contour_point(subimgs1_numbers[3], subctrs1_numbers[3], i_point)
    
def plot_contour_point_five(i_point):
    plot_contour_point(subimgs1_numbers[4], subctrs1_numbers[4], i_point)
interact(plot_contour_point_one, i_point=(0, ctrs1_numbers[1].shape[0]-1));

f:id:nokixa:20220124005014p:plain

subctrs1_one = subctrs1_numbers[1]
pts1_one_idx = [0, 6, 18, 23, 31, 34]
pts1_one = np.zeros([len(pts1_one_idx),2])
for i,idx in enumerate(pts1_one_idx):
    pts1_one[i,:] = subctrs1_one[idx,0,:].copy()
interact(plot_contour_point_two, i_point=(0, ctrs1_numbers[2].shape[0]-1));

f:id:nokixa:20220124005016p:plain

subctrs1_two = subctrs1_numbers[2]
pts1_two_idx = [29, 34, 39, 52, 84, 88]
pts1_two = np.zeros([len(pts1_two_idx),2])
for i,idx in enumerate(pts1_two_idx):
    pts1_two[i,:] = subctrs1_two[idx,0,:].copy()
interact(plot_contour_point_three, i_point=(0, ctrs1_numbers[3].shape[0]-1));

f:id:nokixa:20220124005018p:plain

subctrs1_three = subctrs1_numbers[3]
pts1_three_idx = [13, 48, 49, 62, 63, 79, 80]
pts1_three = np.zeros([len(pts1_three_idx),2])
for i,idx in enumerate(pts1_three_idx):
    pts1_three[i,:] = subctrs1_three[idx,0,:].copy()
interact(plot_contour_point_five, i_point=(0, ctrs1_numbers[4].shape[0]-1));

f:id:nokixa:20220124005020p:plain

subctrs1_five = subctrs1_numbers[4]
pts1_five_idx = [2, 4, 8, 9, 35, 36, 58, 63]
pts1_five = np.zeros([len(pts1_five_idx),2])
for i,idx in enumerate(pts1_five_idx):
    pts1_five[i,:] = subctrs1_five[idx,0,:].copy()

Now that the points are chosen, let's display them all together.
Also, with this approach the angle adjustment applied to the template images last time is unnecessary, so versions without it are prepared.

pts1_numbers = [pts1_one, pts1_two, pts1_three, pts1_five]
ctrs1_templates = []
plt.figure(figsize=(6.4,4.8), dpi=100)
for i,ctr in enumerate(subctrs1_numbers[1:5]):
    img = subimgs1_numbers[i+1]
    for p in pts1_numbers[i]:
        img = cv2.drawMarker(img, p.astype('uint'), (0,255,0), markerType=cv2.MARKER_CROSS, markerSize=3)
    template = np.zeros((img.shape[0], img.shape[1]), 'uint8')
    cv2.drawContours(template, [ctr], -1, 255, -1)
    ctrs1_templates += [template]
    plt.subplot(2,4,1+i), plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)), plt.xticks([]), plt.yticks([])
    plt.subplot(2,4,5+i), plt.imshow(template, cmap='gray'), plt.xticks([]), plt.yticks([])
plt.show()

f:id:nokixa:20220124001045p:plain

Implementing the ICP algorithm

First, an initial value of the transformation matrix is needed.
This time the idea is:

  • Use as the initial value the transformation that maps the rotated bounding rectangle of the template onto the rotated bounding rectangle of the target

This transformation alone might possibly even be sufficient?
With bounding rectangles, rotations in 90° steps should probably also be considered. The rest is just implementing what was written above.

# pts: list of 2D points, or ndarray of shape (n,2)
# query: 2D point to find nearest neighbor
def find_nearest_neighbor(pts, query):
    min_distance = float('inf')
    min_idx = 0
    for i, p in enumerate(pts):
        d = np.linalg.norm(query - p)
        if(d < min_distance):
            min_distance = d
            min_idx = i
    return min_idx, min_distance

def get_initial_transform(src_ctr, dst_ctr):
    src_box = cv2.boxPoints(cv2.minAreaRect(src_ctr))
    dst_box = cv2.boxPoints(cv2.minAreaRect(dst_ctr))
    # Rotated patterns are created when starting index is slided
    dst_box = np.vstack([dst_box, dst_box])
    # Area of converted image
    dst_rect = cv2.boundingRect(dst_ctr)
    
    src_pts = [p for p in src_ctr[:,0,:]]
    dst_pts = [p for p in dst_ctr[:,0,:]]
    min_sum_distance = float('inf')
    for i in range(4):
        M = cv2.getAffineTransform(src_box[0:3], dst_box[i:i+3])
        sum_distance = 0
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts, p2)
            sum_distance += d
        if(sum_distance < min_sum_distance):
            M_rtn = M
            min_sum_distance = sum_distance
    return M_rtn

Let's compute some initial transformation matrices as a trial, starting with comparison against the "1" template.

for i, ctr in enumerate(ctrs1[0:20]):
    subimg, subctr = create_contour_area_image(img1_resize, ctr)
    M = get_initial_transform(subctrs1_numbers[1], subctr)
    converted_img = cv2.warpAffine(subimgs1_numbers[1], M, (subimg.shape[1], subimg.shape[0]))
    plt.figure(figsize=(3.2,2.4), dpi=100)
    print('No. ', i)
    plt.subplot(1,2,1), plt.imshow(cv2.cvtColor(converted_img, cv2.COLOR_BGR2RGB)), plt.title('Template'), plt.xticks([]), plt.yticks([])
    plt.subplot(1,2,2), plt.imshow(cv2.cvtColor(subimg, cv2.COLOR_BGR2RGB)), plt.title('Target'), plt.xticks([]), plt.yticks([])
    plt.show()

No. 0

f:id:nokixa:20220124001047p:plain

No. 1

f:id:nokixa:20220124001050p:plain

No. 2

f:id:nokixa:20220124001052p:plain

No. 3

f:id:nokixa:20220124001055p:plain

No. 4

f:id:nokixa:20220124001057p:plain

No. 5

f:id:nokixa:20220124001059p:plain

No. 6

f:id:nokixa:20220124001102p:plain

No. 7

f:id:nokixa:20220124001104p:plain

No. 8

f:id:nokixa:20220124001107p:plain

No. 9

f:id:nokixa:20220124001109p:plain

No. 10

f:id:nokixa:20220124001112p:plain

No. 11

f:id:nokixa:20220124001114p:plain

No. 12

f:id:nokixa:20220124001116p:plain

No. 13

f:id:nokixa:20220124001119p:plain

No. 14

f:id:nokixa:20220124001121p:plain

No. 15

f:id:nokixa:20220124001123p:plain

No. 16

f:id:nokixa:20220124001126p:plain

No. 17

f:id:nokixa:20220124001128p:plain

No. 18

f:id:nokixa:20220124001130p:plain

No. 19

f:id:nokixa:20220124001132p:plain

With just this initial value it looks like the character "1" can be checked properly, though since the photos are taken from the same angle, perhaps that is only to be expected.
Next, let's implement and try ICP.

One thing that bothered me is what happens when nearest neighbors collide and the same target point is assigned to more than one correspondence.
The correspondence coordinates enter the matrix \boldsymbol{A} used to compute the affine matrix, and since an inverse is computed, I have a feeling something odd could happen.

So the implementation avoids such duplicates.

While at it, template matching is also performed to look at the similarity; to check the effect of ICP, the similarity under the initial estimated matrix is shown as well.
For cv2.matchTemplate() I will use cv2.TM_CCORR_NORMED as the comparison method. Its maximum value is 1.0, attained when the two images match perfectly.
One more thing: shapes can apparently also be compared with the matchShapes() function, so I will try that too. There, smaller values mean better matches.

https://docs.opencv.org/4.5.5/d5/d45/tutorial_py_contours_more_functions.html

# src, dst: ndarray, shape is (n,2) (n: number of points)
def estimate_affine_2d(src, dst):
    n = min(src.shape[0], dst.shape[0])
    x = dst[0:n].flatten()
    A = np.zeros((2*n,6))
    for i in range(n):
        A[i*2,0] = src[i,0]
        A[i*2,1] = src[i,1]
        A[i*2,2] = 1
        A[i*2+1,3] = src[i,0]
        A[i*2+1,4] = src[i,1]
        A[i*2+1,5] = 1
    M = np.linalg.inv(A.T @ A) @ A.T @ x
    return M.reshape([2,3])

# Find optimum affine matrix using ICP algorithm
# src_pts: ndarray, shape is (n_s,2) (n_s: number of points)
# dst_pts: ndarray, shape is (n_d,2) (n_d: number of points, n_d should be larger or equal to n_s)
# initial_matrix: ndarray, shape is (2,3)
def icp(src_pts, dst_pts, max_iter=1000, initial_matrix=np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])):
    default_affine_matrix = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    if dst_pts.shape[0] < src_pts.shape[0]:
        print("icp: Insufficient destination points")
        return default_affine_matrix
    if initial_matrix.shape != (2,3):
        print("icp: Illegal shape of initial_matrix")
        return default_affine_matrix
    M = initial_matrix
    # Store indices of the nearest neighbor point of dst_pts to the converted point of src_pts
    nn_idx = []
    for i in range(max_iter):
        nn_idx_tmp = []
        dst_pts_list = [p for p in dst_pts]
        idx_list = list(range(0,dst_pts.shape[0]))
        for p in src_pts:
            p2 = M @ np.array([p[0], p[1], 1])
            idx, d = find_nearest_neighbor(dst_pts_list, p2)
            nn_idx_tmp += [idx_list[idx]]
            del dst_pts_list[idx]
            del idx_list[idx]
        if __debug__:
            print("icp: nn_idx: ", nn_idx_tmp)
        if nn_idx != [] and nn_idx == nn_idx_tmp:
            if __debug__:
                print("icp: converged in ", i, " iteration(s)")
            break
        dst_pts2 = np.zeros_like(src_pts)
        for j,idx in enumerate(nn_idx_tmp):
            dst_pts2[j,:] = dst_pts[idx,:]
        M = estimate_affine_2d(src_pts, dst_pts2)
        nn_idx = nn_idx_tmp
        if i == max_iter -1:
            print("icp: Not converged")
    return M
binimg1_one = np.zeros_like(subimgs1_numbers[1][:,:,0])
binimg1_one = cv2.drawContours(binimg1_one, [subctrs1_one], -1, 255, -1)

for i, ctr in enumerate(ctrs1[0:20]):
    print("-- No. ", i, " --")
    subimg, subctr = create_contour_area_image(img1_resize, ctr)
    binimg = np.zeros_like(subimg[:,:,0])
    pts = np.zeros((subctr.shape[0], 2))
    for j,p in enumerate(subctr[:,0,:]):
        pts[j] = p
    binimg = cv2.drawContours(binimg, [subctr], -1, 255, -1)
    M_init = get_initial_transform(subctrs1_one, subctr)
    M = icp(pts1_one, pts, max_iter=100, initial_matrix=M_init)
    print("Affine matrix: ")
    print(M)
    subimg_one_converted = cv2.warpAffine(subimgs1_numbers[1], M, (subimg.shape[1],subimg.shape[0]))
    subctr_one_converted = np.zeros_like(subctrs1_one)
    subctr_one_converted_init = np.zeros_like(subctrs1_one)
    for j in range(subctrs1_one.shape[0]):
        subctr_one_converted[j,0,:] = (M[:,0:2] @ subctrs1_one[j,0,:]) + M[:,2]
        subctr_one_converted_init[j,0,:] = (M_init[:,0:2] @ subctrs1_one[j,0,:]) + M_init[:,2]
    binimg_one = np.zeros_like(subimg[:,:,0])
    binimg_one = cv2.drawContours(binimg_one, [subctr_one_converted], -1, 255, -1)
    binimg_one_init = np.zeros_like(subimg[:,:,0])
    binimg_one_init = cv2.drawContours(binimg_one_init, [subctr_one_converted_init], -1, 255, -1)
    similarity1 = cv2.matchTemplate(binimg.copy(), binimg_one, cv2.TM_CCORR_NORMED)
    similarity1_init = cv2.matchTemplate(binimg.copy(), binimg_one_init, cv2.TM_CCORR_NORMED)
    similarity2 = cv2.matchShapes(subctr, subctr_one_converted, cv2.CONTOURS_MATCH_I2, 0.0)
    print("similarity1: ", similarity1, "(", similarity1_init, " with initial matrix)", ", similarity2: ", similarity2)
    plt.figure(figsize=(6.4,2.4), dpi=100)
    plt.subplot(1,4,1), plt.imshow(cv2.cvtColor(subimg_one_converted, cv2.COLOR_BGR2RGB)), plt.title('Template'), plt.xticks([]), plt.yticks([])
    plt.subplot(1,4,2), plt.imshow(cv2.cvtColor(subimg, cv2.COLOR_BGR2RGB)), plt.title('Target'), plt.xticks([]), plt.yticks([])
    plt.subplot(1,4,3), plt.imshow(binimg_one, cmap='gray'), plt.title('Template'), plt.xticks([]), plt.yticks([])
    plt.subplot(1,4,4), plt.imshow(binimg, cmap='gray'), plt.title('Target'), plt.xticks([]), plt.yticks([])
    plt.show()
-- No.  0  --
icp: nn_idx:  [24, 5, 13, 15, 2, 23]
icp: nn_idx:  [24, 5, 12, 15, 2, 23]
icp: nn_idx:  [24, 5, 12, 15, 2, 23]
icp: converged in  2  iteration(s)
Affine matrix: 
[[ 3.77521764  0.35939223 -1.62896427]
 [-0.11927385  0.98120093 -0.4825973 ]]
similarity1:  [[0.83907473]] ( [[0.8470252]]  with initial matrix) , similarity2:  2.070841343730711

f:id:nokixa:20220124001135p:plain

-- No.  1  --
icp: nn_idx:  [5, 11, 13, 19, 4, 3]
icp: nn_idx:  [5, 11, 13, 19, 4, 3]
icp: converged in  1  iteration(s)
Affine matrix: 
[[ 0.09102535 -0.21797439 11.83687946]
 [ 0.62207725  0.06364548 -0.50776546]]
similarity1:  [[0.7998894]] ( [[0.83268374]]  with initial matrix) , similarity2:  2.069956761410079

f:id:nokixa:20220124001137p:plain

-- No.  2  --
icp: nn_idx:  [24, 6, 10, 12, 21, 23]
icp: nn_idx:  [24, 6, 11, 12, 21, 23]
icp: nn_idx:  [24, 6, 11, 12, 21, 23]
icp: converged in  2  iteration(s)
Affine matrix: 
[[ 0.63722433  0.05213959 -0.56842827]
 [-0.01010113  0.28285502  0.67956106]]
similarity1:  [[0.85370934]] ( [[0.8228804]]  with initial matrix) , similarity2:  3.097868971398144

f:id:nokixa:20220124001139p:plain

-- No.  3  --
icp: nn_idx:  [10, 14, 25, 30, 11, 9]
icp: nn_idx:  [10, 14, 25, 30, 11, 8]
icp: nn_idx:  [10, 14, 25, 30, 11, 8]
icp: converged in  2  iteration(s)
Affine matrix: 
[[ 0.11168581 -0.27227208 13.14478209]
 [ 0.63847995  0.04773183 -0.98538065]]
similarity1:  [[0.8774381]] ( [[0.8368629]]  with initial matrix) , similarity2:  2.0255038136645815

f:id:nokixa:20220124001142p:plain

-- No.  4  --
icp: nn_idx:  [8, 16, 23, 31, 9, 7]
icp: nn_idx:  [8, 16, 23, 31, 9, 7]
icp: converged in  1  iteration(s)
Affine matrix: 
[[-1.06272935e-02 -2.96027105e-01  1.52971042e+01]
 [ 6.03319977e-01  3.05329968e-02  7.12554116e-01]]
similarity1:  [[0.8454327]] ( [[0.85770833]]  with initial matrix) , similarity2:  3.1988653569571213

f:id:nokixa:20220124001144p:plain

-- No.  5  --
icp: nn_idx:  [9, 13, 16, 3, 12, 8]
icp: nn_idx:  [9, 13, 16, 4, 12, 8]
icp: nn_idx:  [9, 13, 16, 4, 12, 8]
icp: converged in  2  iteration(s)
Affine matrix: 
[[-5.36774428e-01 -6.36147697e-02  1.40442789e+01]
 [-5.81606616e-03 -2.55624851e-01  1.52168842e+01]]
similarity1:  [[0.8033988]] ( [[0.81920767]]  with initial matrix) , similarity2:  3.95210735229086

f:id:nokixa:20220124001146p:plain

-- No.  6  --
icp: nn_idx:  [7, 27, 39, 63, 18, 4]
icp: nn_idx:  [6, 27, 39, 63, 17, 4]
icp: nn_idx:  [5, 27, 38, 63, 17, 4]
icp: nn_idx:  [5, 27, 38, 63, 17, 4]
icp: converged in  3  iteration(s)
Affine matrix: 
[[-0.7778685  -0.35529497 34.39894773]
 [ 0.91020511 -0.16915753  8.47497258]]
similarity1:  [[0.71735805]] ( [[0.7633234]]  with initial matrix) , similarity2:  1.88512766720494

f:id:nokixa:20220124001149p:plain

-- No.  7  --
icp: nn_idx:  [1, 14, 33, 40, 0, 57]
icp: nn_idx:  [1, 13, 33, 42, 0, 54]
icp: nn_idx:  [1, 12, 33, 42, 0, 53]
icp: nn_idx:  [1, 11, 33, 42, 0, 53]
icp: nn_idx:  [1, 11, 33, 42, 0, 53]
icp: converged in  4  iteration(s)
Affine matrix: 
[[ 1.09282845 -0.47169988 21.60848193]
 [ 0.57396     0.71953624 -5.82698916]]
similarity1:  [[0.7672092]] ( [[0.8546155]]  with initial matrix) , similarity2:  0.9234014436533062

f:id:nokixa:20220124001151p:plain

-- No.  8  --
icp: nn_idx:  [41, 0, 6, 25, 50, 37]
icp: nn_idx:  [40, 0, 6, 25, 50, 37]
icp: nn_idx:  [40, 0, 6, 25, 50, 36]
icp: nn_idx:  [40, 0, 6, 25, 50, 35]
icp: nn_idx:  [40, 0, 6, 25, 50, 35]
icp: converged in  4  iteration(s)
Affine matrix: 
[[ 3.15292202e-01  4.29651332e-01  4.86435813e-01]
 [-1.34699268e+00 -1.62861226e-02  3.10887500e+01]]
similarity1:  [[0.74493146]] ( [[0.74549276]]  with initial matrix) , similarity2:  1.6186110812682086

f:id:nokixa:20220124001153p:plain

-- No.  9  --
icp: nn_idx:  [46, 5, 21, 25, 39, 42]
icp: nn_idx:  [46, 5, 21, 25, 39, 42]
icp: converged in  1  iteration(s)
Affine matrix: 
[[ 0.80616591 -0.05393659  0.46875474]
 [-0.03735035  0.95663612  0.44759342]]
similarity1:  [[0.9512335]] ( [[0.97588533]]  with initial matrix) , similarity2:  0.1421256423484834

f:id:nokixa:20220124001156p:plain

-- No.  10  --
icp: nn_idx:  [32, 42, 1, 8, 53, 31]
icp: nn_idx:  [32, 42, 1, 8, 54, 30]
icp: nn_idx:  [32, 42, 1, 8, 54, 30]
icp: converged in  2  iteration(s)
Affine matrix: 
[[-1.04624547  0.17536172 23.32801419]
 [-0.22475023 -0.54176223 31.41222332]]
similarity1:  [[0.7661245]] ( [[0.789921]]  with initial matrix) , similarity2:  0.5005787273262177

f:id:nokixa:20220124001158p:plain

-- No.  11  --
icp: nn_idx:  [29, 43, 63, 7, 30, 26]
icp: nn_idx:  [29, 41, 63, 8, 27, 26]
icp: nn_idx:  [29, 40, 64, 8, 27, 25]
icp: nn_idx:  [29, 39, 64, 8, 27, 25]
icp: nn_idx:  [29, 39, 64, 8, 27, 25]
icp: converged in  4  iteration(s)
Affine matrix: 
[[-1.12407669e+00  6.83733951e-02  3.36880211e+01]
 [-2.44519307e-02 -9.50621868e-01  5.19999215e+01]]
similarity1:  [[0.7463398]] ( [[0.80455464]]  with initial matrix) , similarity2:  1.2619150345061985

f:id:nokixa:20220124001200p:plain

-- No.  12  --
icp: nn_idx:  [36, 49, 88, 2, 24, 30]
icp: nn_idx:  [35, 49, 88, 3, 27, 30]
icp: nn_idx:  [35, 49, 88, 3, 27, 30]
icp: converged in  2  iteration(s)
Affine matrix: 
[[-1.56806219 -0.30534418 47.03348041]
 [ 0.47604111 -0.85860263 43.51690626]]
similarity1:  [[0.72022676]] ( [[0.6975103]]  with initial matrix) , similarity2:  1.3914075613092929

f:id:nokixa:20220124001204p:plain

-- No.  13  --
icp: nn_idx:  [0, 6, 16, 19, 28, 29]
icp: nn_idx:  [0, 5, 16, 20, 28, 29]
icp: nn_idx:  [0, 5, 16, 20, 28, 29]
icp: converged in  2  iteration(s)
Affine matrix: 
[[ 0.93756138  0.03949807 -0.76644805]
 [-0.03855138  0.96085223  0.36815656]]
similarity1:  [[0.95289135]] ( [[0.9058108]]  with initial matrix) , similarity2:  0.38096457366596537

f:id:nokixa:20220124001206p:plain

-- No.  14  --
icp: nn_idx:  [37, 58, 113, 1, 23, 31]
icp: nn_idx:  [37, 56, 113, 1, 24, 30]
icp: nn_idx:  [37, 56, 113, 1, 24, 30]
icp: converged in  2  iteration(s)
Affine matrix: 
[[-1.23288405 -0.48977163 47.8703411 ]
 [ 0.57970655 -0.84490737 41.08824725]]
similarity1:  [[0.6762236]] ( [[0.70227975]]  with initial matrix) , similarity2:  1.2474046314232978

f:id:nokixa:20220124001209p:plain

-- No.  15  --
icp: nn_idx:  [0, 2, 6, 17, 25, 24]
icp: nn_idx:  [0, 2, 5, 17, 25, 24]
icp: nn_idx:  [0, 2, 5, 17, 25, 24]
icp: converged in  2  iteration(s)
Affine matrix: 
[[ 0.70935123  0.04373579  2.2724023 ]
 [-0.07280603  0.357142    1.50096987]]
similarity1:  [[0.7443389]] ( [[0.8030085]]  with initial matrix) , similarity2:  1.5579271874548668

f:id:nokixa:20220124001211p:plain

-- No.  16  --
icp: nn_idx:  [20, 26, 45, 2, 35, 18]
icp: nn_idx:  [20, 26, 46, 2, 35, 18]
icp: nn_idx:  [20, 26, 46, 2, 35, 18]
icp: converged in  2  iteration(s)
Affine matrix: 
[[-0.63087244 -0.01974989 16.31276532]
 [ 0.06772852 -0.36275534 18.16710691]]
similarity1:  [[0.81694937]] ( [[0.76340884]]  with initial matrix) , similarity2:  1.640027037212645

f:id:nokixa:20220124001214p:plain

-- No.  17  --
icp: nn_idx:  [0, 12, 40, 49, 69, 73]
icp: nn_idx:  [0, 12, 40, 49, 69, 73]
icp: converged in  1  iteration(s)
Affine matrix: 
[[ 0.9943615  -0.16510235  4.43565989]
 [ 0.13401419  0.99049185 -0.04637385]]
similarity1:  [[0.9555899]] ( [[0.9649432]]  with initial matrix) , similarity2:  0.08709025018016892

f:id:nokixa:20220124001216p:plain

-- No.  18  --
icp: nn_idx:  [111, 18, 57, 70, 99, 104]
icp: nn_idx:  [111, 18, 57, 70, 98, 104]
icp: nn_idx:  [111, 18, 57, 70, 98, 104]
icp: converged in  2  iteration(s)
Affine matrix: 
[[ 0.95356303 -0.34740354 13.56925455]
 [ 0.32619587  0.91220825  0.67636115]]
similarity1:  [[0.95916927]] ( [[0.95294887]]  with initial matrix) , similarity2:  0.4385846009540594

f:id:nokixa:20220124001219p:plain

-- No.  19  --
icp: nn_idx:  [26, 35, 0, 6, 47, 25]
icp: nn_idx:  [26, 35, 0, 6, 48, 24]
icp: nn_idx:  [26, 35, 0, 6, 48, 24]
icp: converged in  2  iteration(s)
Affine matrix: 
[[-0.90189543  0.11394542 21.30119745]
 [-0.12584142 -0.56866086 31.567395  ]]
similarity1:  [[0.7807008]] ( [[0.7943264]]  with initial matrix) , similarity2:  0.7837848069205069

f:id:nokixa:20220124001221p:plain

I also added debug output gated on `__debug__`.
Since it defaults to True, the messages are shown in this run.
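For reference, here is a minimal sketch of this kind of gating (my own illustration, not the exact code used here):

def debug_print(*args):
    # __debug__ is a Python built-in constant; it is True unless the
    # interpreter is started with the -O option
    if __debug__:
        print(*args)

debug_print('icp: nn_idx: ', [24, 5, 13, 15, 2, 23])
# Running "python -O script.py" disables these prints.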

Looking at these results:

  • The ICP algorithm visibly improves the point matching.
  • Shear deformation from the affine transform is visible (parts that were right angles in the original template image have become slanted).
  • Whether ICP actually improved on the initial estimated parameters is debatable... if anything, the scores tend to get slightly worse. Raising the resolution might improve this.
  • For comparing the template against the target contour, comparison via cv2.matchTemplate(), or more precisely via cv2.TM_CCORR_NORMED, seems easier to interpret, since its maximum value is fixed at 1.
  • With the cv2.matchTemplate() comparison, detection of the digit "1" looks feasible; a threshold of roughly 0.9 seems about right.
  • With cv2.matchShapes(), the range of the returned value is unclear, which also makes choosing a threshold difficult.

That is my overall impression.
For now, for all digits other than "0", I'll proceed with this policy: compute the affine transform matrix with the ICP algorithm, then score the match against the template with cv2.matchTemplate(). A sketch of this decision rule follows below.
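Here is a hedged sketch of that rule; the names tmpl_solid, target_solid, and M are my placeholders (two equal-size binary contour images and the 2x3 ICP-estimated affine matrix), not the article's actual variables:

import cv2

def matches_digit(tmpl_solid, target_solid, M, threshold=0.9):
    # Warp the template's filled-contour image so it lines up with the target
    h, w = target_solid.shape
    warped = cv2.warpAffine(tmpl_solid, M, (w, h))
    # With equal-size images, cv2.matchTemplate() returns a single
    # TM_CCORR_NORMED score in [0, 1]
    score = cv2.matchTemplate(warped, target_solid, cv2.TM_CCORR_NORMED)[0, 0]
    return score >= threshold, score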

Another stopping point

Comparison and classification for the digit "0" still remain, but I'll leave it here for now.

OpenCVやってみる - 32. Contour deformation (discussion only)

Happy New Year!
More than half of January has already gone by, but this is my first post of the year.

This year I'm continuing with tallying 春のパン祭り sticker points.
I started last March and have been moving along slowly with plenty of detours, but I'd like to finish in time for this year's 春のパン祭り.

What this post covers

Last time I prepared the template images, so this time it is basically just a matter of running template matching.
However, I also want to account for the deformation caused by photographing the sticker sheet at an angle.
Let's get started.

Contour deformation

When a person photographs the sticker sheet with a camera, the camera angle will inevitably deviate at least a little from the sheet's normal axis. The point digits in the captured image then appear squashed in one direction or another, and that is what I'd like to correct.

When a plane is photographed as in this case, the mapping from points on the plane to points in the image is, strictly speaking, a projective transform. However, estimating the parameters of a projective transform is computationally heavier, and it also requires more matched point pairs to be prepared.
As long as the camera's image plane does not tilt too far from parallel to the subject plane, an affine approximation is sufficient, so that is what I want to try (the short illustration below shows the difference in required point pairs).
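For illustration, generic OpenCV usage (not this article's code): an affine transform is fixed by 3 point pairs (6 unknowns), while a projective transform needs 4 point pairs (8 unknowns).

import cv2
import numpy as np

# 3 matched point pairs determine an affine transform (2x3 matrix)
src3 = np.float32([[0, 0], [100, 0], [0, 100]])
dst3 = np.float32([[10, 5], [105, 12], [2, 98]])
M_affine = cv2.getAffineTransform(src3, dst3)

# 4 matched point pairs are needed for a projective transform (3x3 matrix)
src4 = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
dst4 = np.float32([[10, 5], [105, 12], [98, 103], [2, 98]])
M_persp = cv2.getPerspectiveTransform(src4, dst4)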

I'll proceed in this direction for now, but I also want to check whether correcting the deformation with an affine transform is really necessary.
If we assume photos are never taken at a steep angle, it may well be unnecessary...
Still, this is a learning exercise, so I want to try it anyway; that is the motivation here.

Reference for affine transforms
https://note.nkmk.me/python-opencv-warp-affine-perspective/

Applying the transform in this case

What I want to do this time is compare the template image against the detected point-digit contours (more precisely, the image regions around them).
The view is that, with the template image as the reference, the contour's surrounding image is the same content captured from a different angle.

  • Estimate the parameters of this transform.
  • Apply the inverse transform to the contour's surrounding image to recover the image as seen from the original camera angle.

That is what I aim for, as sketched below.
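A minimal sketch of these two steps, assuming pts_template and pts_target are already-matched Nx2 float32 point arrays (hypothetical names; in later posts the estimation is actually done with ICP):

import cv2
import numpy as np

def undo_camera_tilt(target_img, pts_template, pts_target):
    # 1. Estimate the affine transform mapping template points to target
    #    points (cv2.estimateAffine2D is RANSAC-based by default)
    M, inliers = cv2.estimateAffine2D(pts_template, pts_target)
    # 2. Apply the inverse transform to the target image to approximate
    #    the view from the template's camera angle
    M_inv = cv2.invertAffineTransform(M)
    h, w = target_img.shape[:2]
    return cv2.warpAffine(target_img, M_inv, (w, h))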

An interpretation of the affine transform?

An affine transform maps between two planar coordinate systems by applying a 2x2 matrix and a translation.

 \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} \tau_x \\ \tau_y \end{bmatrix} = \boldsymbol{M} \begin{bmatrix} x \\ y \end{bmatrix} + \boldsymbol{\tau}

From here on, I'll set the translation \boldsymbol{\tau} aside and consider the matrix \boldsymbol{M}.

In an affine transform there are no particular constraints on the elements of \boldsymbol{M} (other 2D coordinate transforms such as Euclidean and similarity transforms do constrain the matrix), so I gave some thought to what these elements mean.

I did think this through, but there may be mistakes, so please bear with me and don't take it on full trust.

f:id:nokixa:20220117230423p:plain

First, the figure above shows the camera photographing the subject plane from an oblique direction.
The x and y axes are defined parallel to the subject plane, and the z axis perpendicular to it.
The x' axis is then defined as the projection of the camera axis onto the subject plane, and the y' axis as the in-plane axis perpendicular to x'.
The figure below shows the scene viewed from a direction perpendicular to the z axis.

f:id:nokixa:20220117230426p:plain

Now consider expressing the subject's coordinates using the x' and y' axes.
In the figure above, the x' axis makes an angle \varphi with the x axis (how to define this angle probably deserves a little more thought...), so the conversion from (x, y) coordinates to (x', y') coordinates is

 \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos(-\varphi) & -\sin(-\varphi) \\ \sin(-\varphi) & \cos(-\varphi) \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \boldsymbol{R_\varphi} \begin{bmatrix} x \\ y \end{bmatrix}

Next, consider how a point on this subject plane appears on the camera's image plane.
First, look at the plane containing the camera axis and the x' axis.

f:id:nokixa:20220117230429p:plain

Because the subject plane is tilted relative to the camera's image plane, the relation between a point's x' coordinate on the subject plane and its coordinate u on the image plane is not a simple one. However, as long as the distance between the camera's optical center and the subject plane does not vary much along the camera's depth direction, each point can be approximated by its projection onto the green plane in the figure (parallel to the image plane), and the relation between the two coordinates then takes the form

 u = k_x x' \cos\theta = k'_x x' \qquad (k'_x = k_x \cos\theta)

Looking instead at the plane containing the camera axis and the y' axis, things are simpler, since the y' axis is orthogonal to the camera axis.

f:id:nokixa:20220117230431p:plain

The relation between the y' coordinate on the subject plane and the coordinate v on the image plane is

 v = k_y y'

These relations also hold for points that are not on the x' or y' axes. Putting them together:

 \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} k'_x & 0 \\ 0 & k_y \end{bmatrix} \begin{bmatrix} x' \\ y' \end{bmatrix} = \boldsymbol{K} \begin{bmatrix} x' \\ y' \end{bmatrix}

The camera itself also has a rotational degree of freedom about the camera axis. Denoting this rotation angle by \omega, the coordinates on the rotated u' and v' axes are

 \begin{bmatrix} u' \\ v' \end{bmatrix} = \begin{bmatrix} \cos\omega & \sin\omega \\ -\sin\omega & \cos\omega \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \boldsymbol{R_\omega} \begin{bmatrix} u \\ v \end{bmatrix}

f:id:nokixa:20220117230434p:plain

Putting all of the transforms so far together, the following relation holds between a point (x, y) on the subject plane and a point (u', v') on the image plane:

 \begin{bmatrix} u' \\ v' \end{bmatrix} = \boldsymbol{R_\omega} \boldsymbol{K} \boldsymbol{R_\varphi} \begin{bmatrix} x \\ y \end{bmatrix} = \boldsymbol{M}_{\omega,k'_x,k_y,\varphi} \begin{bmatrix} x \\ y \end{bmatrix}

The matrix \boldsymbol{M}_{\omega,k'_x,k_y,\varphi} that appears here is a 2x2 matrix with four free parameters, and it corresponds to the affine transform matrix \boldsymbol{M}.
Working this out properly should give the relation between the elements of \boldsymbol{M} and the four parameters (\omega, k'_x, k_y, \varphi), and should also show that those parameters can set the elements of \boldsymbol{M} arbitrarily; a small numeric check follows below.
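As that check (my own verification sketch, not from the article), the matrix can be built from the four parameters directly:

import numpy as np

def affine_M(omega, kx_dash, ky, phi):
    # M = R_omega @ K @ R_phi, with the same sign conventions as above
    R_omega = np.array([[np.cos(omega), np.sin(omega)],
                        [-np.sin(omega), np.cos(omega)]])
    K = np.diag([kx_dash, ky])
    R_phi = np.array([[np.cos(phi), np.sin(phi)],
                      [-np.sin(phi), np.cos(phi)]])
    return R_omega @ K @ R_phi

print(affine_M(omega=0.2, kx_dash=0.8, ky=1.0, phi=0.5))
# 4 free parameters (omega, k'_x, k_y, phi) -> 4 matrix elements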

Stopping here for now

This post has gotten long, so I'll cut it off here.
From next time on, I'll actually start working on the code.