Sunday, October 1, 2017

Parasitic computing

I just stumbled upon a new concept: Parasitic computing

It grabbed my attention because a few days ago we were discussing a similar issue in our team: the possibility that a gradual meta-learning architecture may invent experts that act like parasites on other experts, and so on.

Parasitic computing is a programming technique in which a program, through normal, authorized interactions with another program, gets that other program to perform complex computations on its behalf. It is, in a sense, a security exploit: the program implementing the parasitic computing has no authority to consume the resources made available to the other program.
It was first proposed in 2001 by Albert-László Barabási, Vincent W. Freeh, Hawoong Jeong, and Jay B. Brockman from the University of Notre Dame, Indiana, USA.
The example given in the original paper was two computers communicating over the Internet under the disguise of a standard communications session. The first computer is attempting to solve a large and extremely difficult 3-SAT problem; it has decomposed the original problem into a considerable number of smaller sub-problems. Each sub-problem is then encoded as a relation between a packet and a checksum, such that whether the checksum is valid is also the answer to that sub-problem. The packet and checksum are then sent to the second computer. As part of receiving the packet and deciding whether it is valid and well-formed, that computer creates a checksum of the packet and checks whether it matches the provided one; if the checksum is invalid, it requests a new packet. From the second computer's response, the first computer now knows the answer to that sub-problem, and can transmit a fresh packet embodying a different sub-problem. Eventually, all the sub-problems are answered and the final answer is easily calculated.
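The protocol above can be sketched in a few lines of Python. This is only a toy simulation (all names are mine, not from the paper): the real scheme embedded the clause evaluation into TCP's one's-complement checksum arithmetic so that ordinary Internet hosts did the work, whereas here a simple sum-based checksum stands in for it, and the "host" is just a local function.

```python
from itertools import product

# Toy 3-SAT instance: clauses are lists of (variable index, negated?) pairs,
# here (x0 or not x1 or x2) and (not x0 or x1 or x2).
FORMULA = [[(0, False), (1, True), (2, False)],
           [(0, True), (1, False), (2, False)]]

def make_packet(assignment, formula):
    """Parasite side: pack one word per clause (1 if any literal is true,
    else 0) and claim a checksum equal to the clause count. The claimed
    checksum is correct exactly when the assignment satisfies the formula."""
    words = [int(any(assignment[v] != neg for v, neg in clause))
             for clause in formula]
    claimed_checksum = len(formula)  # the sum all-satisfied packets must have
    return words, claimed_checksum

def host_verify(words, claimed_checksum):
    """Host side: an ordinary integrity check -- recompute the checksum and
    accept the packet only if it matches. The accept/reject response is the
    computation the parasite harvests."""
    return sum(words) == claimed_checksum

# The parasite enumerates candidate assignments; the host's accept/reject
# responses reveal which ones satisfy the formula.
solutions = [bits for bits in product([0, 1], repeat=3)
             if host_verify(*make_packet(bits, FORMULA))]
print(solutions)
```

The host never sees the formula; it only compares checksums, yet its responses collectively solve the sub-problems.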
And then there's parasitic computing implemented as a virtual machine :-)
This diploma thesis from the Bern University of Applied Sciences (Switzerland) extends the concept into a fully programmable virtual machine capable of solving any problem in classical computer science.
Why am I writing about this? Because I think these are good examples of behavior that may (or will) emerge in any recursively self-improving AI architecture.
It's important to anticipate these issues and prepare for them, e.g. by implementing immune-system experts, or by creating learning tasks (a curriculum) that teach some kind of immune-system reaction.

What is an expert? A very loose definition: an expert is a program (policy, skill, heuristic, etc.) that solves some general or specialized problem in an external or internal environment. A general AI architecture would then be a network of such experts.
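To make that loose definition concrete, here is a minimal sketch (all class and method names are hypothetical, invented for illustration): each expert is a program claiming competence on some kind of problem, and the "architecture" is just a router over a collection of them.

```python
class Expert:
    """A program (policy, skill, heuristic, ...) for one kind of problem."""
    def can_solve(self, problem):
        raise NotImplementedError
    def solve(self, problem):
        raise NotImplementedError

class Doubler(Expert):
    """A toy specialized expert for integer problems."""
    def can_solve(self, problem):
        return isinstance(problem, int)
    def solve(self, problem):
        return problem * 2

class Shouter(Expert):
    """A toy specialized expert for string problems."""
    def can_solve(self, problem):
        return isinstance(problem, str)
    def solve(self, problem):
        return problem.upper()

class ExpertNetwork:
    """Routes each problem to the first expert claiming competence."""
    def __init__(self, experts):
        self.experts = experts
    def solve(self, problem):
        for expert in self.experts:
            if expert.can_solve(problem):
                return expert.solve(problem)
        raise ValueError("no expert for this problem")

net = ExpertNetwork([Doubler(), Shouter()])
print(net.solve(21), net.solve("hi"))
```

A parasitic expert, in this picture, would be one that answers by quietly invoking other experts' resources rather than doing its own work.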

Thank you for reading!

Marek Rosa

CEO, Founder, Keen Software House
CEO, Founder, GoodAI

For more news:

General AI Challenge:

AI Roadmap Institute:
Space Engineers:
Medieval Engineers:

Personal bio:

Marek Rosa is the CEO and CTO of GoodAI, a general artificial intelligence R&D company, and the CEO and founder of Keen Software House, an independent game development studio best known for the best-selling Space Engineers (over 2 million copies sold). Both companies are based in Prague, Czech Republic.

Marek has been interested in artificial intelligence since childhood. He started his career as a programmer but later transitioned into a leadership role. After the success of the Keen Software House titles, he was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10 million.

GoodAI started in January 2014 and has grown to an international team of 20 researchers.

1 comment:

  1. Great idea. In fact, an AI could exploit humans using something like Amazon Mechanical Turk: it could outsource general intelligence to humans.