
The calendar lists the algebraic geometry seminars and other activities of interest organized by the University and the Politecnico di Torino.

## Event

Title: Algebraic Geometry and Computer Vision: Inception Neural Network for Calabi-Yau Manifolds
When: 05/05/2021 - 14:30
Where: Palazzo Campana - TORINO
Room: Virtual room - Webex
Speaker: Riccardo Finotello
Affiliation: CEA Paris-Saclay
Poster:

## Description

Abstract:


Computing topological properties of Calabi-Yau manifolds is, in general, a challenging mathematical task: traditional methods lead to complicated algorithms and, in most cases, no closed-form expressions. At the same time, recent years have witnessed the rise of deep learning as a method for exploring large data sets and learning their patterns and properties. This is particularly interesting for unravelling complicated geometrical structures, a central issue in mathematics and theoretical physics as well as in the development of trustworthy AI methods.

Motivated by their distinguished role in string theory for the study of compactifications, we compute the Hodge numbers of Complete Intersection Calabi-Yau (CICY) 3-folds using deep neural networks. We focus on architectures involving convolutional layers, as most modern applications (both in research and in industry) benefit from shared parameters for feature creation and pattern recognition in the input. We therefore map the original task to a computer vision problem, reminiscent of object identification. We introduce a new regression neural network, inspired by Google's Inception network, which combines theoretical knowledge about the inputs with recent advances in AI. As a result, we reach 97% accuracy in the prediction of $h^{1,1}$ using just 30% of the available data for training, and almost perfect accuracy with an 80% training ratio, outperforming previous results by a large margin.
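The idea of an Inception-style regression on a CICY configuration matrix can be sketched in a few lines. The toy below, written with numpy, is only illustrative and not the network from the talk: it assumes a 2-D integer configuration matrix as input, runs parallel convolution branches with different kernel shapes, pools each branch's feature map, and feeds the result to a linear regression head (the kernel shapes, pooling, and weights are hypothetical placeholders).

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 2-D cross-correlation with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty(x.shape, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def inception_block(x, kernels):
    """Apply several convolution branches in parallel, stack their feature maps."""
    return np.stack([conv2d_same(x, k) for k in kernels])

def predict_hodge(x, kernels, w, b):
    """Global-average-pool each branch, then a linear regression head."""
    feats = inception_block(x, kernels).mean(axis=(1, 2))  # one scalar per branch
    return float(feats @ w + b)

# Toy input: a 12x15 integer matrix standing in for a CICY configuration matrix.
rng = np.random.default_rng(0)
x = rng.integers(0, 5, size=(12, 15))
kernels = [np.ones((3, 3)) / 9, np.ones((1, 5)) / 5]  # two parallel branches
w, b = np.array([0.5, -0.2]), 1.0                      # untrained head, for shape only
y = predict_hodge(x, kernels, w, b)
```

In the actual Inception-style networks the parallel branches and learned weights are trained end to end; the sketch only shows how differently shaped kernels over the same input are combined into a single regression output.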

This shows the potential of deep learning to learn from geometrical data, and it proves the versatility of architectures developed in different contexts, which may therefore find their way into theoretical physics and mathematics for exploration and inference.