Tuesday, October 3, 2017

Supermathematics and Artificial General Intelligence



This thread (also available in PDF form here) is about attempts to build artificial general intelligence.

Artificial general intelligence is often described as likely to be mankind's last invention.

I describe how I came to construct the supermanifold hypothesis in deep learning (a component of another description called 'thought curvature') in relation to quantum computation.




I - Babies know physics plus they learn
In 2016, I read somewhere that babies know some physics intuitively.

It is also empirically observable that babies use that intuition to develop abstractions of knowledge, in a reinforcement-learning-like manner.



II - Algorithms for reinforcement learning and physics
Now, I knew beforehand of two major types of deep learning models:

(1) models that use reinforcement learning. (DeepMind's Atari Q-network)
(2) models that learn laws of physics. (UETorch)

However:

(a) Object detectors like (2) use an operation called pooling to gain translation invariance over objects, so that the model learns regardless of where the object is positioned in the image.
(b) In contrast, (1) excludes pooling, because (1) requires translation variance, in order for Q-learning to apply to the changing pixel positions of the objects.
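The tension between (a) and (b) can be illustrated with a toy one-dimensional max-pooling sketch (a minimal illustration of my own, not taken from either model; the arrays and window size are hypothetical):

```python
import numpy as np

def max_pool(x, size=2):
    # Non-overlapping 1-D max pooling over windows of `size`.
    return x.reshape(-1, size).max(axis=1)

a = np.array([1., 0., 0., 0.])  # an "object" at position 0
b = np.array([0., 1., 0., 0.])  # the same object shifted by one pixel

# Without pooling the representations differ (translation variance),
# which is what Q-learning on pixel positions relies on.
assert not np.array_equal(a, b)

# After pooling, the small shift disappears (translation invariance),
# which is what an object detector relies on.
assert np.array_equal(max_pool(a), max_pool(b))
```

A model that needs both properties at once cannot simply toggle pooling on or off, which motivates the search in the next section.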


III - What to create to solve the problem?
As a result, I sought a model that could deliver both translation invariance and translation variance at the same time, and, reasonably, part of the solution lay in models that disentangle factors of variation, i.e. manifold learning frameworks.
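As a loose illustration of recovering a factor of variation, here is a minimal sketch of my own (a toy linear example using PCA, not one of the frameworks mentioned above; the data and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a single 1-D factor of variation t, embedded linearly
# in 2-D with a little noise.
t = rng.uniform(-1, 1, size=200)
X = np.outer(t, [2.0, 1.0]) + rng.normal(scale=0.01, size=(200, 2))

# PCA via SVD: the leading principal direction recovers the
# underlying latent coordinate of the manifold.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
recovered = Xc @ Vt[0]  # learnt latent coordinate

# The recovered coordinate correlates (up to sign) with the true factor t.
corr = abs(np.corrcoef(recovered, t)[0, 1])
```

Nonlinear manifold learning and deep disentangling generalize this idea: learn coordinates along which independent factors vary.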

I didn't stop my scientific thinking at manifold learning though.

Given that cognitive science may be used to constrain machine learning models (much as firms like DeepMind often use cognitive science as a boundary on the deep learning models they produce), I sought to create a disentanglable model that was as constrained by cognitive science as the algebra would permit.



IV - Approaching the task...

As a result, I created something called the supermanifold hypothesis in deep learning (a component of another description called 'thought curvature').

This was motivated by evidence of supersymmetry in cognitive science; I compacted machine-learning algebra for disentangling into the regime of supermanifolds. This can be seen as an extension of manifold learning in artificial intelligence.

Given that the supermanifold hypothesis compounds ϕ(x, θ, θ̄)ᵀw, here is an annotation of the hypothesis:
  1. Deep learning entails ϕ(x; θ)ᵀw, which denotes the input space x and the learnt representations θ.
  2. Deep learning underlines that coordinates or latent spaces in the manifold framework are learnt features/representations, or directions that are sparse configurations of coordinates.
  3. Supermathematics entails (x, θ, θ̄), which denotes some x-valued coordinate distribution and, by extension, directions that compact coordinates via θ, θ̄.
  4. As such, the aforesaid (x, θ, θ̄) is subject to coordinate transformation.
  5. Thereafter, 1, 2, 3, 4 and supersymmetry in cognitive science, within the generalizable nature of Euclidean space, reasonably effectuate ϕ(x, θ, θ̄)ᵀw.
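Point 1, ϕ(x; θ)ᵀw, is simply a learnt feature map followed by a linear readout. A minimal numeric sketch (the one-layer tanh map and all shapes are hypothetical choices of mine, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, theta):
    # phi(x; theta): a one-layer feature map with learnable parameters theta.
    W1, b1 = theta
    return np.tanh(x @ W1 + b1)

# Hypothetical shapes: 3-D input -> 4 learnt features -> scalar readout.
theta = (rng.normal(size=(3, 4)), np.zeros(4))
w = rng.normal(size=4)

x = rng.normal(size=3)
y = phi(x, theta) @ w  # phi(x; theta)^T w
```

The hypothesis in point 5 proposes extending the parameter coordinates θ of such a map with Grassmann-valued partners θ̄, i.e. working on a supermanifold rather than an ordinary manifold.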

V - An experiment: A Transverse Field Ising Spin (Super)–Hamiltonian Quantum Computation
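No experimental details follow in this version of the post. As context for the title, the standard transverse-field Ising Hamiltonian is H = −J Σᵢ σᶻᵢσᶻᵢ₊₁ − h Σᵢ σˣᵢ; a minimal numpy construction for a small open chain (a textbook construction, not the experiment itself; J, h, and the chain length are illustrative):

```python
import numpy as np

# Pauli matrices.
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def kron_chain(ops):
    # Tensor product of single-site operators over the whole chain.
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def tfi_hamiltonian(n, J=1.0, h=1.0):
    # H = -J * sum_i sz_i sz_{i+1} - h * sum_i sx_i  (open boundary)
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        ops = [I2] * n
        ops[i], ops[i + 1] = sz, sz
        H -= J * kron_chain(ops)
    for i in range(n):
        ops = [I2] * n
        ops[i] = sx
        H -= h * kron_chain(ops)
    return H

H = tfi_hamiltonian(3)  # 8x8 Hermitian matrix for a 3-spin chain
```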



VI - Limits


Notably, although thought curvature may turn out to be invalid in its simple description in relation to artificial general intelligence, there is a non-trivial possibility that the mathematics of supermanifolds may inspire future deep learning; cutting-edge deep learning work tends to consider boundaries in the biological brain, and biological brains can be evaluated using supersymmetric operations.

In broader terms, I consider the following evidence:

(1) Manifolds are in the regime of very general algorithms, where many degrees of freedom are learnable, such that, for example, models gain the ability to possess translation invariance and translation variance at the same time (i.e. disentangling factors of variation).

(2) Given (1) and the generalizability of Euclidean space, together with the fact that supersymmetric measurements persist in biological brains, it is not absurd that supermathematics or Lie superalgebras (on supermanifolds) may eventually apply empirically in deep learning, or some other named study of hierarchical learning.



VII - Questions

Does anybody here have good knowledge of supermathematics or a related field, and could you give any input on the above?

If so, is it feasible to pursue the model I present in the supermanifold hypothesis paper?

And if so, apart from the ones discussed in the paper, what types of training samples do you reckon would warrant reasonable experiments in the regime of the model I presented?


via International Skeptics Forum http://ift.tt/2kkAOaL
