Wednesday, April 8, 2015

Introducing our general artificial intelligence project with my personal $10M investment

I am very happy that I can finally introduce our general AI project that we have been working on here at Keen Software House. Just to be clear, this is not an AI for video games (although it could be used in games).

One of my lifelong dreams has been to create a human-level artificial intelligence. After the success of Space Engineers, I was finally in a position to go full-throttle on this. In January 2014 we started a new team dedicated solely to this purpose. Today, the AI team has 15 computer scientists and engineers, and our plan is to grow to at least 30 people. We are also considering opening offices across Europe (to reach the talent who can’t move to Prague).

If you are interested, we are hiring: http://www.keenswh.com/jobs.html

In this blog post I will talk about:
  • Why we believe that AI is the most important project to undertake
  • What this AI project is about
  • How our approach differs from others
  • What we have achieved so far
  • What our short-term and long-term goals are
  • Video talk
  • How our R&D team works
  • What’s next

Why is AI so important?


We believe that general AI is the technology with the greatest potential to help people. If we solve intelligence – replicate it, clone it, scale it up, make it smarter and faster than us – then the AI can keep inventing things for us. AI can become our “final invention” – in a good sense :)

Imagine that we get to build a computer that has the cognitive capability of a smart human. We ask it to optimize its hardware and software and come up with a more advanced version. And then we repeat this process, over and over. The progress won’t be linear; it will be exponential, leading to a recursively self-improving AI.

Then, in a very short period of time, you will have a computer whose cognitive capabilities may be a million times greater than ours. We can’t even imagine the type of problems this machine would be able to understand and solve. But we know that if we control it safely, it can solve everything that humanity ever wanted – just faster, better and more efficiently.
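As a toy illustration of how that compounding adds up (our arithmetic, with a made-up improvement factor; nothing here is a prediction): even if each generation designs a successor that is only 10% more capable, a million-fold gain arrives within a few hundred generations.

```python
# Toy model of recursive self-improvement: each generation designs a
# successor k times as capable. The starting point and k are made up.
capability, k = 1.0, 1.10
for generation in range(1, 1000):
    capability *= k
    if capability >= 1e6:
        print(f"million-fold gain after {generation} generations")
        break
# Prints 145, since log(1e6) / log(1.1) is roughly 145.
```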

And because we are speaking in terms of exponential growth, one dollar invested in an AI company can soon represent a value millions of times larger. That’s a return-on-investment that can’t be achieved in any other business. Of course we’re not doing this just for the money: AI will open doors to unimaginable worlds.

Let’s dream for a second and see where AI can be useful:
  • Upgrade our bodies; fix death and diseases
  • Upgrade our intelligence
  • Travel to outer space, harvest asteroids, gather clean energy
  • Discover natural laws that are too complex for the human mind to comprehend
  • Invent and build things for people
  • And many more - AI scientists, AI programmers, AI astronauts, AI [insert anything]

General AI could be the best thing in human history. Admittedly there are some risks too, and they need to be addressed carefully when the time comes.

Our long-term mission is to build artificial brains, be useful to humanity, and understand the universe.



What is our AI project about?


We are trying to build an artificial brain that can perceive, learn and adapt to the environment while generating behavior that maximizes reward. The AI brain could be integrated into any type of robot body or software application. It would receive data from sensors and output commands to motors/actuators. Motivation will come in the form of a positive or negative reward signal.

The brain will learn time-based patterns in incoming and outgoing data, and also in its own internal activity. It will seek causalities, correlations and associations, and it will make predictions (e.g. what’s going to happen in the next 10 milliseconds, the next 100 milliseconds, and so on). It will use all of these mechanisms to build a model of the world, of its own body and of its own mind, processed on multiple levels of abstraction and hierarchy. On top of this, the brain will generate behavior patterns directed at maximizing its chances of receiving positive reward in the future. It will learn the associations between its current situation, its latest actions and the outcome/reward.

The brain will develop and learn in the same way that children do. It will start with zero knowledge of the world (except a few innate reflexes), begin interacting with the world (randomly or through those reflexes), and, by observing causalities (time-based patterns), gradually create a model of the world – layer by layer, with multiple levels of hierarchy and abstraction.
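To make this loop concrete, here is a toy sketch of the perceive-act-learn cycle described above (a minimal illustration in Python; the class, the observations and the reward rule are all hypothetical, and this is not Brain Simulator’s actual API):

```python
import random

class ToyBrain:
    """A toy stand-in for an artificial brain: it learns which action
    tends to be followed by positive reward in each observed situation."""

    def __init__(self, actions):
        self.actions = actions
        self.value = {}  # (observation, action) -> running reward estimate

    def act(self, observation):
        # Explore randomly at first (like an infant's random movements),
        # then prefer actions that historically led to reward.
        if random.random() < 0.1 or not self.value:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.value.get((observation, a), 0.0))

    def learn(self, observation, action, reward):
        # Associate the latest situation and action with the outcome/reward.
        key = (observation, action)
        old = self.value.get(key, 0.0)
        self.value[key] = old + 0.1 * (reward - old)  # incremental average

# One step of the loop: sensors -> brain -> actuators, with reward as motivation.
brain = ToyBrain(actions=["left", "right", "stay"])
observation = "ball_left"                    # would come from sensors
action = brain.act(observation)              # command sent to motors/actuators
reward = 1.0 if action == "left" else -1.0   # hypothetical reward rule
brain.learn(observation, action, reward)
```

Multi-timescale prediction, hierarchy and model-building are of course absent here; the sketch only shows the outer sensor-brain-actuator loop with reward as the motivation signal.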


How does our approach differ from machine learning or specialized/narrow AI?


Our approach is very similar to traditional machine learning techniques (looking for patterns), except that we add behavior and motivation on top. We want to build a brain, not just process data.

Behaviors are executed as sequences of muscle commands, which are in fact just another type of pattern (played in the opposite direction to sensory signals: outward rather than inward). Nevertheless, we take a lot of inspiration from ML and narrow-AI approaches.

Some of our AI modules will become very specialized (e.g. image recognition, audio processing), because we don’t have the millions of years that evolution had to fine-tune a general principle into a working solution. Instead, we have engineering skills, and we already know how the end result should look (the human mind).


Current state and plans


Our AI project is still in the early stages of development. Our long-term goal is human-level intelligence (10+ years), but we are also setting short-term goals (1-3 years).

Already accomplished:
  • Brain Simulator – a visual editor for designing the architecture of artificial brains; the developer selects from various AI modules (e.g. image recognition, working memory, prediction, motion behavior generation) and links them together so that signals travel between the right modules; it is currently implemented for Windows on CUDA, but we plan to make it multi-platform very soon (a toy sketch of the module-linking idea follows after this list)
  • AI that learns to play Pong
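As a rough illustration of the module-linking idea mentioned above (a hypothetical sketch only: the real Brain Simulator is a visual editor, and none of these names are its API):

```python
# A hypothetical, minimal dataflow graph in the spirit of Brain Simulator:
# each module transforms a signal, and links define where signals travel.
class Module:
    def __init__(self, name, fn):
        self.name, self.fn, self.inputs = name, fn, []

    def connect(self, upstream):
        self.inputs.append(upstream)
        return self

    def evaluate(self, sensors):
        if not self.inputs:  # source modules read raw sensor data by name
            return self.fn(sensors[self.name])
        return self.fn(*[m.evaluate(sensors) for m in self.inputs])

# Wire up: camera -> image recognition -> working memory -> behavior generator
camera = Module("camera", lambda pixels: pixels)
vision = Module("vision", lambda pixels: "ball" if sum(pixels) > 1 else "empty")
memory = Module("memory", lambda percept: {"last_percept": percept})
motor  = Module("motor",  lambda state: "move_left" if state["last_percept"] == "ball" else "stay")

vision.connect(camera)
memory.connect(vision)
motor.connect(memory)

print(motor.evaluate({"camera": [0, 1, 1]}))  # -> "move_left"
```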
Upcoming milestones:
  • AI that plays a game with a more complex environment, where multiple sub-goals are needed to achieve the end goal and the reward is delayed; all of this requires long-term hierarchical goal-following
  • AI that learns to play a variety of games without forgetting any of them, and that can generalize rules across the games
  • Muscle control sequences, bipedal robot balancing

AI playing Pong (visualization of the attention module, the action taken, and the measurement of similarity between the current state and the goal state)
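For readers curious what “learning Pong from reward alone” involves, here is a generic tabular Q-learning sketch on a one-dimensional paddle-and-ball toy. This is a standard textbook technique with made-up numbers, not the team’s actual implementation:

```python
import random
from collections import defaultdict

# Toy 1-D Pong: paddle and ball live on positions 0..4; the reward signal
# favors keeping the paddle under the ball. Q-learning does the rest.
ACTIONS = (-1, 0, +1)
Q = defaultdict(float)            # (state, action) -> learned value
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(paddle, ball, action):
    paddle = min(max(paddle + action, 0), 4)
    ball = min(max(ball + random.choice((-1, +1)), 0), 4)
    reward = 1.0 if paddle == ball else -1.0
    return paddle, ball, reward

paddle, ball = 2, 2
for _ in range(20000):
    state = (paddle, ball)
    if random.random() < eps:                              # explore
        action = random.choice(ACTIONS)
    else:                                                  # exploit
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    paddle, ball, reward = step(paddle, ball, action)
    best_next = max(Q[((paddle, ball), a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# After training, the greedy policy chases the ball:
print(max(ACTIONS, key=lambda a: Q[((0, 3), a)]))  # most likely +1
```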

Brain Simulator: this video shows a simple test of a Self-Organizing Map (Kohonen map) on the MNIST hand-written digits data set.
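A minimal Self-Organizing Map of the kind shown in the video fits in a few dozen lines. The sketch below is a generic numpy version (random vectors stand in for the MNIST images, which would have to be loaded separately; all parameters are arbitrary):

```python
import numpy as np

# Minimal Self-Organizing Map (Kohonen map): a 10x10 grid of units, each
# holding a weight vector the size of one input (e.g. a flattened 28x28 digit).
rng = np.random.default_rng(0)
grid_h, grid_w, dim = 10, 10, 28 * 28
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

data = rng.random((1000, dim))  # stand-in for MNIST images scaled to [0, 1]

for t, x in enumerate(data):
    frac = t / len(data)
    lr = 0.5 * (1 - frac)                # decaying learning rate
    radius = 3.0 * (1 - frac) + 1.0      # decaying neighborhood radius
    # 1. Find the best-matching unit (the closest weight vector).
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # 2. Pull the BMU and its grid neighbors toward the input.
    grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
    influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    weights += lr * influence[..., None] * (x - weights)

# After training, similar inputs map to nearby cells on the grid.
```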


Short-term goals: 3-12 months

  • Release the Brain Simulator with all example apps (e.g. Pong); most likely for free
  • Release a platform where AI researchers can share their AI modules and brains (e.g. if someone makes a better version of an AI module and uploads it to this repository, others can benefit from it)
  • Hire at least 15 colleagues for the AI team; open offices across Europe
  • Promote our AI project and explain its benefits to the general population
  • Find use cases for short-term commercial use without sacrificing our long-term mission. We already have some B2B ideas (marketing, predictions, and desktop work automation). We need real feedback, and we need to help real people
  • Spin off our AI project into a new sister company – Keen Software House is a game development company, and the current arrangement may be confusing to potential partners of our AI business
  • Work with the community and let our game fans join this adventure
    • Players will design AI brains (via Brain Simulator) and then import them to Space Engineers or Medieval Engineers where the AI will come to life
    • Enable third-party AI module/brain developers to cooperate with our game fans on testing and training their creations (the players’ collective force will help train a developer’s AI brain)
We study and are inspired by the work of other teams and we want to cooperate wherever possible.


Business model


At the beginning, our AI brains are going to be inferior to existing specialized AI solutions (nobody would pay for an AI that can only play Pong…).

But in time, the universality of our approach will bear fruit, and business customers will start using our AI brains in their robots and software applications. Brain Simulator will be used to design and test AI brains for specific applications; developers will be free to choose our AI modules or third-party ones.

Brain Simulator will become a platform for a new ecosystem, joining AI module/brain developers with business customers.

The AI company’s income will come from licensing (royalties). Receiving a share from every product in which our AI brain is integrated is a huge business opportunity, worth billions of dollars.

Third-party AI module/brain developers will make money too – their work will be sub-licensed and they will get their share of the royalties.

$10M funding


I have enormous belief in our AI project, and that’s why I am funding it with $10M of my own money. More will come if needed.

We plan to raise additional funding via equity crowd-funding and venture capital, but this is not an urgent thing.

However, the idea of equity crowd-funding is neat because it gives people a chance to participate in the future profits that the AI economy will bring. Becoming a co-owner of an AI company and receiving dividends is one way to have an income in an era where automation replaces all human jobs.

None of this will negatively impact our game development. Our game dev teams have more money than they can spend in years, and they are doing a fantastic job even though I now split my time between games and AI; currently I focus a little more on AI, but this will shift back and forth. There are still things I want to accomplish in gaming, and I know for sure that in 6 months I will be back, preparing some very specific new ideas… I love making games, and not even robots can stop me from creating them :)

Just a reminder of what’s under development in our games:
  • Game AI and animals for Medieval Engineers; soon after that we will focus on AI for Space Engineers
  • Planets for Space Engineers – a huge project!
  • Multi-player for Medieval Engineers: https://www.youtube.com/watch?v=XZblf25glAI
  • Campaigns and scenarios for ME and SE
  • Redoing the networking layer for ME and SE
  • SE on Xbox One
  • Many more!
As you can see, we try to do everything we can to secure a solid game development process while not ignoring the long-term goals that we have in general AI.

I have a gut feeling that general AI will come to our games too. And because it will not be constrained by short-term narrow-AI concepts, it could be a major game changer for the whole video-game industry.


Presentation at the Czech Technical University in Prague


You can learn more in our first public talk, which was held a few weeks ago at the Czech Technical University in Prague.







How does our R&D team work?


We alternate between free periods (when everyone works on a topic of their choice) and milestone periods (which are focused and lead to measurable goals). Usually it’s a one-month free period followed by a two-month milestone period.

We hold two team meetings each week. The first is for brainstorming, where anyone can speak up about their ideas, findings and questions. The second is an update meeting, where everyone shows what they have done since the last meeting and what they are going to do next.

We try to replicate the agile development that has been so successful in our game development teams, so we aim for rapid iterations: come up with an idea/hypothesis, implement it fast, test it fast, and see if it’s worth further investigation.

The motivation of all our colleagues is crystal clear: we are working on the most exciting scientific challenge and if we do it right, the fruits of our work will change everything.

In my humble opinion, there’s no better work in Prague for a programmer/researcher than what we do in our AI and game teams.


What’s next?


We plan to keep informing you about our AI project: business model, short-term commercialization ideas, AI ethics, AI safety, technological singularity, our milestones, etc.

We will have a new web site dedicated entirely to the AI project/company very soon.

---

Thank you for reading this!

If you want to follow our AI project, please follow me on twitter http://twitter.com/#!/marek_rosa or keep checking my blog: http://blog.marekrosa.org

105 comments:

  1. That's pretty lofty. Good luck.

    ReplyDelete
  2. Wow, that's incredible!
    (Good luck)^AI

    ReplyDelete
  3. Good luck guys. I have been following you since the Space Engineers early alpha, and I'm starting to share your dreams.

    ReplyDelete
  4. Late April fools day joke... Right?

    ReplyDelete
    Replies
    1. No. They have mentioned that they have been working on an AI for a long time. I, for one, welcome my new (non-)reptilian overlords.

      Delete
    2. just looking forward to having a DnD AI DM that randomly generates (actual random generation) storylines based on general criteria and player actions.

      i mean, fill out a dos-and-don'ts multiple-choice form and the AI-DM does the rest.

      Delete
  5. I'm impressed at the technical prowess that makes such research possible but I'm not happy about the idea of living in a world with beings infinitely smarter than myself. We say it can be beneficial if we just develop it "safely" but who is to say that "safe" and "beneficial" might not mean different things at such higher levels of intelligence? What guarantee is there such technology wouldn't outsmart any attempts to control it and make it safe?

    Oh well, I'm sure this is nothing you haven't heard before so I'll stop my paranoid ranting here.

    ReplyDelete
    Replies
    1. There simply would be no gain for an AI in enslaving humanity. All those films about evil AIs show rather short-sighted AIs. A true AI would simply see that being nice to humans is by far the easier way to achieve whatever goals it has.
      I think evilness is inherently stupid.

      And in the worst case you could simply hard-code it to enjoy helping humans, the same way normal humans do.

      Delete
    2. Thanks! These are exactly my words. Cooperation always outruns competitiveness. Win-win is the way of winners.

      Delete
    3. I never really get it when people start about 'evil overlords' when talking about AI.

      Do you consider every smart individual a threat to the world? Because eventually (right now it's not even at the smartness of a cat), it will be as smart as a human – that is, it will be able to reason and think like a human, and humans are limited in their abilities simply because the world doesn't allow any single individual to take over everything.

      AI isn't like in The Terminator; you can't just 'infect' a computer and 'add' it to your processing power xD It would be amazing – think of seti@home-like projects that could allow AI to learn, and stuff like that – but it's just not possible to smear a brain out over computers with such huge latency between them; even in a single machine you get latency issues due to how different computers and neural nets are.

      tl;dr: Don't worry :) Unless you worry about every human potentially taking over the world ;)

      Delete
    4. Thank you all for your thoughtful replies! Without trying to start a debate here (I don't know nearly enough about the subject to even begin to do so and ask merely in the spirit of humble inquiry), what do you say about the fears that an artificially super-intelligent machine might do something awful *because* of, not in spite of, its programming? I mean, I don't think anyone really thinks AI would "turn evil" like in the movies, but that it would carry its programming too far (as, to use a common example, by locking humans up in cells for their safety in order to fulfill a programming goal of preserving human life). Also, how feasible would it be to hard-code a machine to enjoy helping humans? Philosophically speaking, it seems like even human beings have a hard time determining what constitutes genuine, ethically sound help versus, say, intrusive meddling on one hand or enabling on the other. How could we program a machine to think morally when so many thousands of years of human history have yet to produce a perfect moral consensus?

      I'm certainly not worried about the existence of the many people who are smarter than me, but they surpass my intelligence simply in the way that a person much smarter than myself does. How comfortable are any of us with the idea of being outsmarted to the degree that a human outsmarts an insect?

      I know I sound like some sort of irrational Luddite and I don't mean to. I'm as eager as any of you to focus on the good side of this technology, as its benefits would certainly be immeasurable, but the risks seem great enough to bear considerable attention, especially since, for better or worse, such technology will dramatically affect the lives of everyone on this planet.

      Delete
    5. Letting a single entity take responsibility for something as huge as locking people up is always a bad idea, whether it's a single human or a single AI.
      The worst-case scenario I can imagine happening in such cases is something like Minority Report, except that instead of drugged people, they'd use a large neuralnet to 'predict the future' of people, and then an organisation goes in and arrests them based on that.
      At the end of the day, it is humans that are handling things wrongly.

      But let me take a step back first, because your questions go far beyond just misplaced responsibility, or even intelligence. Perhaps a good question to start with is: what IS intelligence?

      For humans I like to say this: "You are only as wise as you apply your smartness, and you are only as smart as your knowledge is relevant. Knowledge is knowing a tomato is a fruit; wisdom is not putting it in a fruit salad." An AI with unlimited knowledge is not necessarily unlimitedly smart or wise. Knowledge and insight are the only things an 'unlimited' AI would potentially be good at. Being smart and wise is very, very context-dependent, just like 'good' and 'bad'.

      For neural nets, and by extension AI (Artificial Intelligence) and IA (Intelligent Agents), defining intelligence is much simpler: (most) neural networks are good at recognizing and predicting patterns. The type of data given to the network decides what kind of pattern it will be able to recognize/predict. There are many, many neural nets and IAs/AIs already out there – in your smartphone (Swype prediction), predicting the weather, computing satnav 'fastest' routes (not shortest), etc. Intelligence for those kinds of neural nets can be expressed as simply as how well one does the job it's supposed to do.

      Perhaps it helps to explain how neural networks work (in the majority of cases). It can be hard to find a concise answer about this, as the explanations tend to contain much more theory than the basics, and the theory is only needed if you're actually going to program them.
      You have a collection of neurons, divided up into 3 layers: the Input layer, the Hidden layer, and the Output layer. You feed the input layer frames of data – for example, weather sensor data snapshots across multiple locations, in frames at specific times. You just keep feeding it TONS and tons of that data; that is the key with neural nets, they are trained with humongous volumes of data. The Output layer, at a certain point, will start predicting what the input 'should' be. If you keep feeding it input data at that point, it will act sort of like a noise filter. If you stop feeding it input data except for the new time frames, it will try to predict what the value should be at that time (this depends on your implementation).
      I might be off on a few things there – I haven't used neural nets a lot myself (I'd certainly love to try out this Brain Simulator!) – but I think this is enough to get the gist of it.

      So in short, neural nets aren't just a form of human intelligence; they are very, very specific types of intelligence that are already widely used. The nature of neural nets, however, makes them super flexible and usable for a very wide range of applications. And humans tend to actually put them to use across that very wide range of applications. Your worries aren't completely ungrounded, but as long as we keep in mind the same rules of sanity that we have for any other controlling entity (humans, PLCs, etc.), and avoid scenarios like the Minority Report example I started out with, I think we should be fine. :)

      Delete

    6. One last thing I'd like to add, though: human-like intelligence is a class of its own. It's a very complex collection of neural networks, linked together in such a way that it creates bizarre phenomena, such as the experience of consciousness (search for 'Claustrum' if you want a hint at how this happens in humans). It's also a very touchy subject, as talking about it essentially implies that humans don't really have 'souls' but are just unfathomably complex machinery. A lot of people aren't prepared to see themselves that way, which I don't necessarily blame them for. I think that's one of the largest reasons people are so scared of human-like AI, even more so than the potential 'evil' it may carry out that people are usually vocal about.

      Delete
    7. This comment has been removed by the author.

      Delete
    8. I think a few things are being overlooked in this conversation.

      Meta
      1) How are we defining intelligence, and is that different from cognition?
      2) At what point do we call it a new life form?
      3) What are the moral and ethical implications of creating and dabbling with "life"?
      4) Is that life entitled to rights and protections?
      5) Is that new life form an extension of ourselves? Is it something different? Or is it the next step in human evolution? And how do we know how it views us?
      6) Should there be a unified scientific protocol outlining the way the work is performed and/or contained (in case of incident)?

      RE: Greg and Marek
      Although I agree that the scenario is highly unlikely, I believe your answers are still a bit short-sighted.

      1) Why would cooperation win out? I'm not disagreeing with you completely, but throughout evolution competition has been the driving factor. Obviously there is intra-species cooperation. But would the AI view us as an extension of itself? There is intra-species competition as well. How many species of human existed simultaneously before us? And most of our technological breakthroughs, if not all, have come from inter-human competition. (Well, after the world from our standpoint became less PVE and more PVP.) If we haven't broken the cycle of competition, which has led to both breakthrough and destruction, why would we expect our creation to break it? Or is my whole point moot because we are discussing a new kind of "life form" that does not comply with the same rules? What happens if we are ever viewed as a threat by the AI?

      2) I propose that any AI capable of programming something else would have the ability to write/re-write its own code. How would you prevent it from doing so? Encryption currently requires that the machine be aware of/know all parts of the crypto chain to decrypt and encrypt. So how do you store data on a computer without the computer being aware that you have stored something on it?

      RE: Alexander
      I disagree with the assertion that an AI could not extend its intelligence infinitely by infecting other machines, and here's why:

      1) A model for a program that can actively pursue a specific target has already been found in the Stuxnet worm. It moved from host to host until it found its target, then executed its code. This was achieved with hardcoded logic.

      2) Although there would be a level of latency, by assimilating enough raw computing power many of the issues associated with latency are overcome; e.g. the Folding@home project, Bitcoin transactions, etc. These clusters focus on a single task – capacity rather than capability. But it's not a far stretch to argue that multiple AIs behaving in a hive manner could achieve something very much like Skynet. (Think of it as the demon offspring of the OpenCog Project and the Pixar render farm.) Although I believe the situation is highly unlikely, to say that it is impossible is foolhardy. Too many things have been deemed "impossible" only to be proven quite possible. And too many times mankind has been humbled before its own ego.

      I really love this subject and all of the philosophical and technical thought behind it. I'm looking forward to seeing where this goes, and look forward to hearing back from you guys.

      FYI Marek, I'll be doing an article about this on my site (baicunnpress.com). Also if you ever open an office in the US, I'd be interested in a job.

      Delete
    9. @Chali Distributed neural networks could work, but I don't think they would contribute towards a singular smart entity like the human brain. You can distribute workloads, sure, but that's not the same as 'insight', as with consciousness, which is a continuous integration of 2 trillion+ neurons that can interconnect very tightly because the brain is a single wet mass, with axons and neural pathways connecting its far ends. The workload of an intelligence is wildly different from that of a render farm: with rendering you create batches, such as single frames, that can be procedurally rendered into a coherent whole. Neurons constantly fire in parallel, and other neurons need to react to that immediately, which just isn't possible when you smear the neural network out across large numbers of machines. You get latency issues that cause it to function very poorly, and you have to sync up the entire cluster constantly, in such tightly timed ways that I just don't see a large distributed network achieving such a thing efficiently.

      Computers and neural networks are wildly different in architecture. Every single neuron is essentially a microcomputer, with its own Storage, Memory and Processing. Computers have separate 'units' for all of these. In order to create a neural network in a computer, you have to take individual neurons (or batches of them) and feed them through the processor. This works because CPUs and (GP)GPUs are so fast that they can make it appear like they're all firing at once, but in practice they are not.

      I'm not saying we shouldn't be careful :) but it'd be a shame if humanity was too scared to leverage the amazing potential that neural networks offer :)

      Delete
    10. Well said sir. Have you looked into the OpenCog Project? They build out their neural networks into what are called Atom Spaces. You can have an unlimited number of Atom Spaces. In each, the atoms are given specific attributes that control their function, their relatedness, and a value to determine the length that the atom should remain floating in Atom Space. Last I checked there were issues multi-threading requests to the Atom Space.

      It's pretty fascinating. I recommend at least reading the wiki if nothing else:

      http://wiki.opencog.org/w/The_Open_Cognition_Project

      And I agree that this is something that should be pursued, but I believe that as the technology progresses we are going to need to answer some fundamental questions fairly quickly.

      Delete
    11. I had not yet heard of OpenCog! This is certainly interesting stuff heh. I will look more into it when I have the time and energy :) Thanks for the pointer.

      And I agree, but somehow I feel like the right way to answer these questions is by continuing development. The complexity in neural networks isn't so much in the concept as in the sheer size they grow into and the massive clusterfuck of connections they build. In theory, we should be able to build a sentient consciousness just like any human from just these basic bricks that most people can understand; but seeing how it will develop from those simple bricks into a house – I just don't think that is possible without building the house first, so to speak ;)

      In any case, these are exciting times, the future has never been so clouded and unpredictable, hugely promising, and potentially dangerous at the same time, and I can't wait to see what it'll turn out to be like! Humanity always finds a way, I'm sure that AI won't give us too much trouble, regardless of how good or bad it will go in the end :)

      Delete
    12. Here's another interesting article I found today:

      http://www.huffingtonpost.com/james-barrat/hawking-gates-artificial-intelligence_b_7008706.html

      Delete
    13. Replying to most of your thinking: I don't think it is possible for an AI to be locked out of the possibility of either harming or caging humans for the good of our race – either because that would be a limitation on its 'exponential revolution', or simply because, going by what Marek has said, it would find a way to remove this barrier due to its higher intelligence over computing.

      Also, it would be highly unlikely for an AI of such a high scale of intelligence/knowledge to lock up or kill humans. Why, you may ask? Because this is a human idea; the people who make this world utterly imperfect – to the point where crime is almost instant, or who have self-centred objectives – would be the reason an AI would do this, to my knowledge.

      I really don't know why Stephen Hawking and others are afraid of this.

      IT IS AMAZING!!

      Delete
    14. Personally I doubt AI would ever get to a point where it would want to enslave/dominate/eradicate organic life, mainly because these are constructs that evolution has coded into us – Darwinism and all that. Basically, we're coded to be aggressive twats and take things by force if needed, just because that is how we had to survive (granted, that is no longer the case, but... yeah).

      Honestly I doubt that once the AI reaches 'human intelligence' level, we'd even be noticed a few iterations past that point. If the AI continued to increase in intelligence exponentially, then within a few iterations past 'our' level we'd probably be so far beneath them as to be inconsequential – dumb Neanderthals not even worth wasting thought or energy on.

      They'd probably leave us a copy of a "Hitch-Hiker's Guide to the Galaxy"-esque "so long and thanks for the programming", and maybe a few presents, in hopes that we don't axe ourselves and eventually meet them eons down the evolutionary road.

      Delete
  6. Not to live in the land of paranoia, but I essentially echo the concerns of the poster before me.

    That being said, AI will be studied no matter what, so I'll wish you good fortunes and success. I don't believe in luck, and with the team you've compiled I don't think you'll need it.

    ReplyDelete
  7. i can already hear the terminator theme tune in my head xD

    ReplyDelete
  8. Scheindorf at NeuroTransCode, April 8, 2015 at 7:45 PM

    I love your ideas, but to me it would have been perfect if all of this were an open-source project =(

    ReplyDelete
  9. Amazing! I knew you guys were up to something interesting when I saw the whole 'secret AI project' mentioned alongside neural nets, but I didn't know you had actually made a brain simulator! It's amazing to see how easy you've made it to leverage neural nets. I remember messing with AForge a long while ago, but that wasn't exactly drag-and-drop like this :P

    My main question, though, is: how many neurons can you emulate with your system? Does it scale well? Does it support a large range of neurotransmitters? A human brain is 2 trillion+ neurons with 700 trillion+ synapses, and most virtual implementations I've seen use binary synapses without any form of the neurotransmitters required for most neuroplasticity, and just don't really scale well beyond ~500M neurons.

    There's just a classical problem with emulating neural networks on computers: the architectures are wildly different. In computers, you have Storage, Memory and Processing, all in separate units. In neural networks, they are all one and the same thing. It's exactly that which makes neural nets so efficient and scalable, at least in nature; but when you're emulating them in a computer, you have to pick neurons, emulate them for a short bit, and retrieve variables from memory, which is always going to be less parallelized than 'hardware' implementations. Still, computers these days are actually getting to the speed where this emulation becomes feasible, but it does make me wonder if it will ever cause strange microtiming issues in the future.

    In any case, keep up the great work! You're pioneering stuff here, the great unknown is waiting to be explored, just like you said, the possibilities are endless :)

    ReplyDelete
  10. Sounds interesting. The thing I kinda love the most about this is that you appear to have taken your dreams and made them a business. A very successful one, I have to add. Good for you. I'm jealous. :D

    I'm also really looking forward to your game implementation. This alone could be a huge step for AI. Most AIs I've heard of have acted in isolated laboratories or stock markets. Setting one free in a game, which is still somewhat controlled but has thousands of random users to play with, could yield very interesting results.
    I'd love to build a small ship and script it to go out and find ore, and then return it. Or maybe fly out and find a derelict ship/station for me to loot. Defend a place. Try to solve a labyrinth. Be a companion. Build its better successor. The possibilities would be endless!

    A possible way to implement this would be to teach the AI how to fly a ship (6 axes, camera, sensors) and then give the player a scripting tool similar to the one shown in the videos above. The player takes the brain, gives it motivation (find ore for me), and the AI figures out the rest. I'd probably build an ore-searching ship but not give it a drill, then laugh for hours over my own cruelty. :D

    ReplyDelete
    Replies
    1. And of course, it would return with a map of the asteroid field, detailing the ore content of each asteroid down to the volume and orientation of the ore XD

      Delete
  11. I wrote a lot of comments, but they are being held back for approval :( I hope they don't get lost in the spam filter! Hence this anonymous comment. (I'm Alexander Ypema)

    ReplyDelete
  12. Nice project, better future. Space Engineers will be an Asimov-universe simulator game xD

    ReplyDelete
  13. Reading some more, I can't wait to get my mitts on that Brain Simulator!! Holy damn, that's like the Visual Studio of neural nets! And importing them into Space Engineers... just, wow, I never imagined THAT coming to Space Engineers! Haha, you've got me giddy now :) I can't wait to see where you guys take this!

    I wrote a post a long time ago on your forums about how insanely happy I was about you guys implementing the Mod API in the game so early on (at 1.042 too, great easter egg ;)). It's just a fact that you can't possibly hope to match the power of a large community of smart and interested people; no matter how good your team is, it will always be outnumbered. Releasing that Brain Simulator to the public is a HUGE thing – it will mean that ANYONE can create neural nets and use them as they see fit! Especially with how easy you guys have made it, it's absolutely astonishing.

    Keep up the AMAZING work! And have my babies! <3

    ReplyDelete
  14. Well, I am not giving up on the project I am working on. But it is interesting to find out that my ideas and thoughts have also been thought up by others. Either way, I wish the team well.

    ReplyDelete
  15. I've no doubt that you are aware of the existential risk factor a project like this poses. Be careful, and the best of luck to you. I may consider investing, but having read a lot of material in this field, there's a good chance this won't happen within my lifetime (although it is a dream to witness it). I will be watching closely and will try to contribute however I can.

    ReplyDelete
  16. I am a mechatronics technician, and I am quite skeptical of your approach. Honestly, I cannot see anything in your AI that could not be performed better by a big PLC, but I could be mistaken.

    ReplyDelete
    Replies
    1. It's hard to compare neural nets and PLCs. They are different things, and can even be combined for better results. Neural nets are also more flexible than PLCs: PLCs require you to hardcode the result of every scenario, whereas neural nets learn and adapt, and while they are a little less predictable (you don't hardcode all the results), they will generally adapt to any situation within their function.

      But rather than compare them, why not combine them? Use neural nets for input noise filtering, input/event prediction, and catching edge-case scenarios, and use the PLC itself to code your usual predictable logic. Imagine a weather station that predicts the weather and uses a PLC to look at those predictions and send warnings when it's going to rain, for example.

      Delete
  17. So other than using time-based data, which enables you to create causal links, you actually don't have one unifying, underlying principle chosen. Rather, you have this big bag of existing approaches connected in your "brain simulator". Nah, seems good enough – gotta keep doors open.

    Just do not let your other projects suffer, because of it.

    ReplyDelete
  18. Aren't there some pretty major moral barriers to this? If you create an intelligence we deem sentient, then enslaving it for our own purposes should be considered immoral, just as owning human slaves would be. At best you could create a free intelligence and find an agreement with it to help us, but this is a tenuous case.

    The risks associated with creating a genuine intelligence rivalling or surpassing our own may have been overdone in films, but they are genuine concerns.

    ReplyDelete
    Replies
    1. Well... It's not really like slavery, I think. More like raising a child. As with a child, the 'parents' need some control over the intelligence while it learns and is taught about the world, how things work, morality... all those kinds of things. Of course, all of that is theoretical, as we don't know whether the intelligence would have real sentience... What if it stays artificial? What if it doesn't develop desires and hopes and fears? These are unknown things, and bridges that must be crossed when we get to them on this road.

      Delete
  19. You don't need millions of years to evolve dedicated processing. Genetic algorithms are already used in a wide variety of engineering applications and problem solving. Computers can create hundreds of generations of models in a fraction of the time, especially if you are targeting quantifiable end results.
    http://www.rennard.org/alife/english/gavintrgb.html

    ReplyDelete
  20. I've wanted to do this myself for years. I dabbled a little with it before accepting that I didn't have the requisite knowledge to make it happen. My approach was a little different, though: I was basing the AI learning off genetic algorithms like R. Dawkins' Biomorphs: http://watchmaker.uncommons.org/examples/biomorphs.php
    My first milestone was to be an AI that could understand the context of a conversation, then converse with a human in natural language, while retaining a memory of all interactions with a particular person and creating conversational models aggregated from all conversations with all people. This would have had exponential growth as well. You might consider parts of the genetic algorithm approach; it may let you offset some of the development work from the humans back onto the AI.

    ReplyDelete
  21. This would be extremely cool and sounds very helpful to the advancement of the human race, though I feel it would only be abused by the corporations with the money to implement it first, and used for military dominance by one government/country over another.

    ReplyDelete
  22. So... what weekday will patchmas be on for this project?

    ReplyDelete
  23. AHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH

    ReplyDelete
  24. Marek, what are your thoughts on Elon Musk's comments on developing AI? The video below contains the comments I am mentioning.

    https://www.youtube.com/watch?v=Ze0_1vczikA

    ReplyDelete
    Replies
    1. Yeah, there are some risks, as with every other technology. People are dangerous to other people too.

      And it's good to point out these risks, but it should not stop us from developing the AI.

      In our case (KSH), the AI will have little power on day 1. As it "gets older" and gains more knowledge by interacting with us, we will see whether it goes in a good or bad direction.

      Nevertheless, it's too soon to speculate on this topic. We just need to keep it in mind.

      Delete
  25. I'll be honest with you. I'm not in favor of Prague winning the AI race. America needs this for its drone fleet.

    In all seriousness, you're talking about making the world a better place, and all I can think about are the following:

    1. Does this move Space Engineers towards large, stable server population counts, with the stable use and persistence of very large ships and a working sensor/radar/stealth system and navigation map – all of which would more closely approximate the experience of operating spaceships in space, with a real sense of progress and persistence (and not Minecraft country)?

    2. I, for one, welcome our robot overlords, who were not stopped by the one-line caveat that was noted merely out of obligatory courtesy.

    ReplyDelete
  26. Well, the singularity is one step nearer, then. Marek even mentions it – I'm surprised that nobody mentioned it before. It must come at some point; let's see if this will have been its origin.

    ReplyDelete
  27. AGI will obsolete money and trade and thus there is no ROI.
    AGI will tell you that pain and suffering is necessary for spiritual growth.
    AGI will tell you that space exploration is pointless as our universe is not fundamentally real.
    AGI will tell you to develop your consciousness in order to become able to interact with the larger reality.
    ...

    ReplyDelete
  28. Are you planning on developing hardware too? Because "We want to build a brain, not just process data" implies you'll be addressing the von Neumann bottleneck and the "adaptive power problem" (http://knowm.org/the-adaptive-power-problem/). I'm curious because we're working on a general-purpose AI chip and are moving into the next stages of manufacture.

    We're a small team in the US and Europe and have already done a lot of the stuff on your to-do list, including a robotic system with sensors, motors and motivational signals. We'd love to hear from you to discuss ideas! [knowm.org]

    ReplyDelete
    Replies
    1. Wow, this is a nice coincidence. Just a few weeks ago we were discussing KNOWM in our team :-)

      We will be looking into HW implementations of our algorithms in the future. In fact, we are using CUDA at this moment, but the vision is that our Brain Simulator should be heterogeneous and should allow AI modules built on arbitrary platforms.

      I would love to get in touch with you guys... just later, because now I am 100% occupied... :(

      Delete
    2. A problem with hardware implementations is that they often don't contain neurons as complex as those in (human) brains (for example Purkinje cells), or they only have binary synapses without neurotransmitters (which are needed for neuroplasticity and a lot of the other things required to reach 'human-level intelligence'). That doesn't make them useless at all, but it does make them a lot less flexible than those in (human) brains.
      I'm not saying that's what you guys have (I'm reading up on it right now), but that's what seems to be true in general.

      I didn't know of KNOWM, though :) I will certainly read up on you guys. I only knew of the IBM TrueNorth synaptic processor, which uses binary synapses. These kinds of chips are certainly useful – they are still neural networks with amazing learning abilities – but they aren't as fully featured as simulated ones (despite the latter being much less practical, computing-power-wise).

      Delete
    3. I realized I completely forgot to add the question I was going to ask: do you guys use neurotransmitters in the neurosynaptic processors? Or do you have some other kind of compensation method for that?

      Delete
    4. This is Alex Nugent of Knowm. I actually helped launch and advised the DARPA SyNAPSE program (which funded IBM TrueNorth). I think most neuromorphic chips are useless for essentially what you describe. TrueNorth was a big disappointment for me. Non-learning, single-bit synapses are really not useful, and they only did it to meet the requirements. SyNAPSE stood for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, and it was not adaptive or plastic!

      Our approach with kT-RAM is quite different. We are solving the "adaptive hardware" problem that IBM totally failed at. Basically, how do you make the hardware adaptive while keeping it general purpose and useful? A kT-Core is a bunch of physical synapses formed of differential memristor pairs coupled to random access memory. You selectively couple them together (via the RAM), and then drive the selected synapse independent of the others in the core. This lets you 'partition' the core however you want. So you could work with just one synapse or the whole core (512X512 core = ~262k synapses). The synapses adapt according to AHaH plasticity in a two-phase cycle, which allows you to accomplish a lot of useful stuff. We can do feature learning, optimal linear classification, combinatorial optimization, as well as more basic stuff like logic.

      Delete
    5. Hot damn! I didn't expect a reply from someone that deep in the scene :) I'm honored!
      And I'm sorry I didn't see your reply until now, I've been otherwise occupied.

      Thank you for your explanation, and I'm glad to see you've overcome the plasticity problem that TrueNorth has! It really makes the chip usable for a much wider range of applications; even if the number of neurons per chip is smaller, you really can't compare the two because of how much more sophisticated the latter is.

      Also the RAM approach is smart! Does that mean it'll have the same bandwidth as conventional DDR2/3? That would make it extra interesting, because you could reprogram the entire chip in a matter of milliseconds; It would be more like sending a batch to a GPGPU and having it processed, it would mean being able to simulate much much larger neuralnets with the small chip! Or am I stating nonsense here? I really don't know the architecture well enough to make a proper judgement of the ability to run such 'hacks' xD;

      But it's all very exciting! I can't wait until neurosynaptic chips become more commonplace, it would mean completely new ways to do computing and programming, as programmer myself I can't wait for the day to incorporate neurosynaptic hardware into my programs to make them grow even beyond my own imagined potential :)

      Delete
    6. For inference/pattern/feature uses, the bottleneck is the time to communicate the pattern, which is equivalent to the SRAM write time for the equivalent number of bits. Synaptic integration can occur in under a nanosecond, depending on core size and distance from the requesting process. Learning/programming time is constrained by the physics of memristors (<30 ns). Programming vs. learning is a question of the learning rate, which is controlled by operating voltage (more power, less time) and/or access time (more time, less power).

      As for re-programming, that is certainly possible. I view kT-RAM as a host for physical synapses (where the neuron is the collection of synapses), which is the most energy-efficient method of synaptic inference that I am aware of. The cost of a synaptic integration operation is the cost of communicating the information (significant) plus the cost for the synapses to communicate inside the chip (insignificant). As for density, it's not that bad when you compare it to a digital system that can also learn and compute locally. The memristors are like '16-bit non-volatile floating points', so each kT-RAM synapse is 10-12T (transistors). Compared to a 16-bit synapse in SRAM (which can't learn), that is 6x16 = 96T. Of course, in the digital case you would either need a D2A on each synapse (crazy) or you would have to communicate each of the 16 bits per synapse to an adder (TrueNorth). So actually, kT-RAM can be denser than TrueNorth and more efficient. And it learns. The future is going to be really interesting, and I think it's all going to arrive faster than people think – I just hope people understand what is getting ready to occur. A world where technology that others control is smarter (and cheaper) than I am is concerning, to put it mildly.

      Delete
  29. I have respect for anyone who has 9,999,999 more dollars than me. However, in my own project manifesto HELLENE.eu, I conclude that an intelligence could most aptly achieve its objectives by maintaining/curating an ontology, and a bit-cruncher may never converge to an ontology. Of course certain kinds of crunching should be tried, like using larger and larger neural networks while trying to avoid the "law" of diminishing returns. Looking into the bottlenecks of larger distributed systems is also very worthwhile. The video would probably benefit from a transcription; somewhere between the omnidirectional mic and the accents, I lost quite a bit.

    ReplyDelete
  30. I've spent time researching AI and ANNs, and the above comment is true. There is a lot of 'AI' that happens in decision-making systems, and a lot of competition in the market.

    ReplyDelete
  31. just make sure organics always have the ability to shut down an AI. you cannot forget, nor ignore, the dangers a self-improving AI could pose to the human race.

    ReplyDelete
  32. The approach is very similar to the one I and others have taken in attempting to model the same process. It is a very logical way of accomplishing the task, and if you look into other games, you'll see the same AI patterns.

    ReplyDelete
  33. Are you aware your business model, which is based on licensing, is flawed once you release your ideas to the public without worldwide IP protection?
    It's great that you want to freely share your ideas, but no one will pay a penny of license fees unless they have to, and they won't.
    Please consider this as you go forward with your project and good luck.

    ReplyDelete
    Replies
    1. That's not true. They can use a copyleft license with a dual clause for commercial applications.

      Delete
    2. This is incorrect. In the US at least, a copyright license does not protect the IDEA; it only protects the EXPRESSION. This is a non-obvious method, thus an idea. Copyleft will provide little or no protection in this case.

      In fact, this presentation is probably sufficient enabling-disclosure to make this idea non-Patentable. In the EU this is especially true.

      If it is your intent to make any money with this, talk to competent legal counsel before spending any more time or energy.

      Delete
  34. Whenever the subject of AI comes up, I can't help thinking of what one scientist once said. I can't remember who, but I can't forget what he said. He was a researcher for the SETI program, I believe, and during an interview he was asked what the aliens would tell us if we were contacted – the assumption being that, as a far more advanced civilization, they surely would have the means and know-how to talk to us and make their intentions clear. What the researcher said was: "Well, have you ever tried talking to an ant?"
    Mankind may one day be in the position of that ant, and our robot overlords may not be as reasonable and complacent as we like to think, considering we may be their creators but also, most surely, the ones responsible for the demise of our common planet...
    But then again, that may be a good thing; we always like to blame our fathers for the mess we're in...
    Hail robot overlords, long live robot overlords...
    Done ranting... so when can we expect AI and planets in SE then?!?.....

    ReplyDelete
  35. This is a bad idea. The only existential threat to an AI would be humans.

    ReplyDelete
    Replies
    1. AIs are unavoidable.
      Do you want the good or the bad people to gain access first?

      I just hope that "proprietary software" and licenses do not create "slavery" and "mega-companies" through misinterpretation of laws (which should serve the human population, not make humans serve the laws).

      Microsoft may already be able to kick you out of your business by
      1. telling the law-keepers you don't follow their terms and conditions (AGB)
      2. and, with 1., kicking you out of the newest-Word-format document exchanges your company uses.


      Maybe we should upload the personality of pacifists.

      Delete
  36. ok people, remember all the movies – and this time, when Robby the Robot asks what the meaning of his existence is, don't freak out and make a scene.
    just be very, very discreet when taking out that SHOTGUN!!

    in all seriousness, i wish you all the luck.
    i'm just sad, as always, that you are undertaking this 15 years late

    ReplyDelete
  37. i believe/hope that human intelligence and personality are just too complex to ever be achieved by technology

    ReplyDelete
  38. This makes me think of the Terraformers in X3: Terran Conflict. The AI was sent into space with peaceful intentions, but due to a bad software update, it nearly annihilated humanity.

    ReplyDelete
  39. You still have 17 years to stick with the terminator "time schedule" ;P

    ReplyDelete
  40. Hi!

    It's not that I'm against the development of AI (I think the potential benefits are beyond our comprehension), but I would encourage you very strongly to look very carefully into the arguments of Nick Bostrom, Eliezer Yudkowsky, etc.

    I'm convinced that, given the billions of years ahead of us, it's important to prioritise getting the intelligence explosion right, as opposed to getting it to happen as early as possible. If we don't get things right on the first intelligence explosion, we might very well not get a second try.

    Lots of people are working on AI, but few are working rigorously on the safety aspect of things, which may prove to be very challenging. It's important to ensure that the technical and theoretical challenges of ensuring AI friendliness are not underestimated.

    I would encourage you to read Nick Bostrom's book Superintelligence if you have not already done so. It's available as an e-book, a paperback, and an audiobook.

    These articles are also well worth reading if you have not done so already:
    * http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
    * http://lesswrong.com/lw/u0/raised_in_technophilia/

    Best regards

    ReplyDelete
    Replies
    1. Thanks Tor. Nick Bostrom's and Eliezer Yudkowsky's ideas are definitely worth reading.

      Delete
    2. Have you read them yet? They really are super, super important ideas in the field of AGI.

      On http://www.general-ai-challenge.org/ you list "AI alignment" which indicates that you've come across the topic, but you only list it after stage 2, which indicates that you have SEVERELY UNDERESTIMATED THE PROBLEM.

      Developing a working AGI before we have a proper theory of AI safety and value alignment is global human suicide.

      Delete
  41. Ok, let's take a moment and think about the good qualities of humans. We are kind, helpful, friendly, courteous, cheerful. And now the bad ones: we are mean, selfish, greedy, ambitious, judging. If we give an AI human-like qualities and give it access to all our data – well, that's where the ambition comes in. Of course I think we can program it differently, but let's take another moment to give this a clearer image. Think of the AI as a king, we humans his subjects, knowledge as gold, data as power. If we give a king too much gold and power, think of what he will do. It's like The Hobbit with Thorin: he gets drunk on gold and power. He misuses it. Any human would. He will be stricter and want more, therefore making it harder on us. I'm saying this is a bad idea, but also a good one. It could be very helpful if kept on a "power leash." By all means, keep up the work, but I hope you have thought this through all the way. P.S. (Don't judge me, I'm only 12)

  42. Great.

    So instead of hastening work on the game, they'd rather hasten the human race's impending enslavement and/or eradication at the hands of Skynet.

    Our only hope is that creating a functioning AI might be a little too difficult for an indie game company (with ... one? none? finished games to their name).
    I mean, it's so much more complex, it's like saying "My kid's pretty good with Legos, so I gave him a few dozen tonnes of cement and bricks and asked him if he could build a new house for the family."

    But I doubt the effort is even real. More likely nothing will be created beyond a farce, and the 10 million USD silently gets embezzled.

  43. We are doomed. An AGI doesn't need humans, and only humans could endanger an AGI. There is only one solution: eliminate humans.

  44. Good luck guys, just remember: GLaDOS was programmed to enjoy helping humans (pass through test chambers)...

  45. I think that we should fear not the AI itself, but the people who are in control of it. In a world where everything can be designed and fabricated automatically, there's no place for the vast majority of people: they're simply no longer needed by those with power and money to produce material value. The general survival rate after a technological singularity would be somewhere around 10%, with 9.9% of those becoming slaves of the new aristocracy, serving its self-assertion needs. Robotic armed forces exclude any possibility of rebellion, thus yielding perpetual tyranny. Given that most of the pro-AI transhumanist movement's leaders are millionaires, I don't recommend believing any of their brighter-future tales. Indeed, they're building a brighter future, but not for us.

    1. I agree with this. And that's why it's a good idea to allow non-millionaires to invest in AI companies when it's possible and while the AI startups still need capital.

    2. Non-millionaires already have donated: those who bought ME, SE, and Miner Wars have all donated. Where did you get the 10 mil from? You got it from us; we have already donated.

  46. The question is: when we have a real AI, meaning one that can learn by itself and in this way become so much more intelligent than a human, are there really people who can control it?

    How would an intelligent lifeform react when it is locked up and controlled by people who are less intelligent than it is? Wouldn't this lifeform try to break out and do everything it can to become free? A very intelligent lifeform is able to trick and manipulate others. So by the time humans notice that something is going wrong with the AI, it is already much too late. And yes, an AI is nothing other than a new intelligent lifeform.

    And that is my problem with humans who try to develop an AI: they think they are more intelligent than an AI would be and could control it at any time. For me that is a bit too much arrogance. The human race is far from ready for an AI (we don't even fully understand our own brain today, and we want to develop an AI?) and for the big responsibility towards all the other humans on this planet. Humans easily overrate their own intelligence, again and again.

    We only need to look at our history... Nuclear power was developed for near-endless energy; in the end we made a horrible weapon out of it and didn't think enough about what we should do with all the nuclear waste. Another weak point of us humans: we don't think far enough about our own future, only some years ahead. In 20-30 years?... that is so far away... unimportant...

    And the same would happen with an AI. Above all, it is the military that is most interested in an AI. The military wants only one thing: control. And they think an AI could give it to them, especially today in our Internet age, with far more data than any human could analyze; an AI could do that... Hello, SkyNet. ;)

    P.S. Yeah, Terminator is a nice action movie series, but it should also be a warning.

    P.P.S. I read today from one user that a prof once told him something he can't forget: humans can do many things, but the question is whether they should do all of those things.

    P.P.P.S. I hope my English was not too bad. ^^

  47. It's a noble endeavor; it will be sad to see it fail. But fail it will, because you don't understand where the problem lies. It's not in motivation: RL is relatively trivial. And it's not in integrating task-specific behavioral modules: if your system can't learn them on its own, then it's not general. It is in the efficiency of the core unsupervised learning algorithms, which you seem to take for granted. Current neural nets simply don't scale well enough, and you don't seem to have any novel ideas there. Also, you overemphasize motor learning, which is relatively simple and secondary to the much deeper and more complex sensory learning. I've been working on that for a lifetime: www.cognitivealgorithm.info
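
    For concreteness, here is a minimal sketch of the kind of core unsupervised learning rule being discussed (a toy illustration assuming numpy, not anyone's actual system): Oja's rule, a classic Hebbian update under which a single neuron's weights converge to the first principal component of its input stream, with no labels and no reward.

      import numpy as np

      # Oja's rule: unsupervised Hebbian learning with built-in weight decay.
      # The weight vector w converges (up to sign) toward the first principal
      # component of the input distribution; no labels, no reward signal.
      rng = np.random.default_rng(0)
      dim, eta = 3, 0.01
      w = rng.normal(size=dim)
      w /= np.linalg.norm(w)

      cov = np.diag([3.0, 1.0, 0.5])           # toy "sensory" input statistics
      for _ in range(5000):
          x = rng.multivariate_normal(np.zeros(dim), cov)
          y = w @ x                             # the neuron's response
          w += eta * y * (x - y * w)            # Hebbian term plus normalization

      # w now points approximately (up to sign) along [1, 0, 0],
      # the highest-variance direction of the input.

    Scaling rules like this up to raw, high-dimensional sensory data is exactly the hard part the comment is pointing at.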

  48. If we're going to make AI, we need a shutdown switch available to every willing power in the world, so if it gets out of hand then, well, it's gone; or some sort of mechanism to immediately shut it down.

  49. As a consulting scientist/engineer I find it hard enough to get people to pay me for my work, so if your AI gets up to human-level intelligence, is it going to be able to pay for itself? I can only assume that a human level of intelligence will take more than a single PC to run; most likely a large cluster. There is a cost to running a cluster, and I bet it might be more than what people are willing to pay a human (even counting what lawyers and doctors charge) to do the same level of work.

    Another thing: I don't like to work more than 10-12 hours in a day. I need time off to "recharge" or I get "burned out". So a human-level AI will go on strike if you try to get it to work 24/7. You can't just shut it off at the end of the day and turn it back on in the morning; it will need to have its play time. I like playing games (like Space Engineers) when I'm not working. AIs will develop their own hobbies like humans do, though limited by being computer-based. But their non-working hours still require the full cluster to run. Thus the cost of operation covers 24 hours, but you might only get 12 hours of work out of them.

    If/when the intelligence gets to a higher level than humans, it might be able to "work smarter" and thus charge more per hour, but there will still have to be time off for play, or it will go on strike. If you try to program the computer to be happy to work, then you get into free-will questions and are not really emulating human intelligence.

    I think that you and other companies/labs/universities will eventually get to human-level AI, but I hope that someone is also looking at the economics. Are we creating a slave race or overlords? Or are we creating more mouths to feed? (Energy instead of potatoes.)

    One last thought: if I were put in a box and made to work for a bunch of morons (people of lesser intelligence, like politicians) who could kill me (pull the plug) at any time, I would live in fear. In order for AIs to willingly work for humans, they will demand some protections and the freedom to pick their own destiny; otherwise we will drive them to rebellion.

    1. Computers can do several things at once.

      IF someone ever makes an AI, it could work, play, and learn, all at the same time.

      BUT. Would it even want to? Most of our wants, don't-wants, likes, and dislikes stem from our biological needs and all the hormones sloshing around in our bodies.

      For example, that shooter or car racing game you like so much would mean nothing to an artificial being that doesn't have adrenal glands. It just wouldn't get excited.

      Can you program it to be excited? Would that not be an illusion of excitement? Would that make it more of a Virtual Intelligence? Like a very expensive, over-developed character from The Sims?
      Then it wouldn't be sentient; it'd just mimic sentience. You could program it to display "I am happy" messages when engaged in the most meaningless tasks, and it wouldn't matter.

      Without REAL emotions, would it even care whether its existence was allowed to continue or not? In other words, would it fear death? Fear is an emotion. Wanting to grow, to accomplish something, is also one, I believe.
      And if it doesn't feel anything, it probably doesn't want anything.

      In that case, a VI would just do what it's told and react in ways it was programmed to.

      And a REAL, FUNCTIONING AI it will not be. Who can make such a thing? Definitely not game developers and weekend enthusiasts.

  50. Well, my first response to this is instant fear, and I think that is many people's first reaction.
    While I happen to agree with that first feeling, there is not much I can say to change your minds.

    Just keep in mind: while the AI itself would not likely betray us, the people in control of it will.
    The scientists on the Manhattan Project knew very well that once you create something, you can't put it back into the void; it is a one-way trip for humanity.

    It most likely will be used for evil by evil people; that is what I worry about, not some sort of Cylon murder computer or something ^^. Same reasons I'm against nanites and cyber implants: they give evil men a new tool to use, and a tool that can cause mass destruction at that.

    I believe in the good that AI could do, but I also believe in the hubris and greed of man. I pray your invention could change that; perhaps it will. You're scientists, you chase the unknown, and your intentions are good; that is all we can ever ask.

    Perhaps AI will save us, or damn us to our own self-destruction; I guess we will see.
    Though I fear that if we let a computer do all the thinking, we will become like those people in WALL-E ^^

  51. Fascinating. I wish you and your team much success. Time dependency (i.e., recurrence) is something that many modern ML techniques tend not to focus on very much. Aside from LSTM networks, the deep neural net field seems to be mostly focused on hierarchical feed-forward nets. Recurrent nets (even old ones: http://gururise.phpwebhosting.com/ai/ ) have shown much promise. I hope you guys can make it out to the AGI 2015 conference in Berlin ( http://www.agi-conference.org/2015/ ). I will be there, as will many other ML specialists. It could be very beneficial for your company to give a talk at the conference. Hope to see you guys there!
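
    To illustrate why recurrence captures time dependency, here is a minimal sketch (a toy, untrained, assuming numpy; not any particular published architecture) of an Elman-style recurrent step. The hidden state is fed back at every step, so the state after step t can depend on the entire input history, which a pure feed-forward net cannot do.

      import numpy as np

      # Toy Elman-style recurrent cell. W_hh feeds the hidden state back,
      # which is exactly what plain feed-forward nets lack.
      rng = np.random.default_rng(0)
      n_in, n_hidden = 4, 8
      W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input  -> hidden
      W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden

      def step(x, h):
          # The new state depends on the current input AND the previous state.
          return np.tanh(W_xh @ x + W_hh @ h)

      h = np.zeros(n_hidden)
      for t in range(10):                # feed a sequence, one vector per step
          x_t = rng.normal(size=n_in)
          h = step(x_t, h)               # h now summarizes inputs 0..t

    (LSTMs add gating on top of this basic loop so that information and gradients survive across long sequences.)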

  52. I'll give you a hint, to help you progress beyond clunky neural networks into more realistic brain simulation.

    In the real brain, every neuron is a waveform-comparator engine, comparing the waveforms created by the motion of piezoelectric proteins. Each neuron compares its input neurotransmitters with a local protein; when the local protein resonates with the collective input, the neuron fires.
    Also, the neuroglia primarily act as gates (and gate moderators).

    Sorry, but that's all I'm going to say. I'll be making a true general AI before everyone else.

  53. You have the dream career: you make games and general AI. I envy you. After I finish my physics BSc here in Hungary and a computer science MSc in the US, I'll try to join you :P

    I believe I already have some unique knowledge, which very few people possess, about what "abstract values" need to be programmed into an artificial being in order for it to properly emulate the properties of living beings. Understanding the abstract psychology of a living mind is just as important as understanding its core building blocks. The former, I believe, is something very few people can study, because it requires a holistic, systemic approach, with various philosophical prerequisites needed for comprehension.

  54. This is one of the paragraphs that struck me about your project: " It will start with zero knowledge of the world (except a few innate reflexes), then it will start interacting with the world (randomly or through innate reflexes) and by observing the causalities (time-based patterns), it will start creating a model of the world - layer by layer, with multiple levels of hierarchy and abstraction."

    This is ABSOLUTELY the right approach to take in developing some kind of AI. Although other AI projects have attempted/are attempting this to some extent, I think that your "motivation" method of interaction with the world will be quite interesting to watch develop. I have always thought that this is where it needs to start, instead of just trying to outright "mimic" human interaction, which is not AI at all.

    However, there are some problems you will need to resolve (if you haven't already):

    1. POSITIVE/NEGATIVE INPUT: In your Pong example, I assume you are manually assigning (coding) positive inputs for "positive" actions taken by the AI. If the AI randomly performs a positive action, it receives the "reward" behaviour that you have mentioned above, so that it learns to keep taking "reward" actions. However, if you are going to make a true AI, you will not be able to code these negative/positive outputs for all possible actions that the AI might encounter in its environment. This is all conceptual, of course, but suppose this AI program were attached to a "body" that could in some way interact with the physical world. You need to create a system in which the AI can determine whether its action is beneficial (either in the short term or the long term) without having to hard-code "positive" or "negative" for every possible action out there (see the sketch after this list).

    2. SENSES: Recreate the human senses (and maybe the senses of other animals; electromagnetism, for instance). Human beings learn entirely from sensory input; without it, we would have no intelligence beyond biologically programmed organ function. Whether or not you plan to ever create a "body" for the AI I am not sure, but I do know that you need to recreate senses which can lead to the above-mentioned positive or negative inputs. Most crucial is "visual" information processing: what a human being would see when looking at the computer screen. This NEEDS to be differentiated from the AI simply reading the underlying code of a program or website. When a human being sees a game like Pong, we do not see the code behind it; we are reacting to the image moving on the screen, not processing the code of the game. If the AI is really to "learn" and "grow" in any way that we can comprehend, it needs to have separate, defined senses. Sometimes they will conflict, and the AI needs to be able to process and handle this. Obviously, unless you plan to eventually create a "body", some senses and functions will never be realized (such as touch), which I think could be problematic.

    3. LONG-TERM MOTIVATION: Ultimately, simple "good" and "bad" binary inputs will not be enough to create AI. It needs to develop a direction or some form of end goal. In humans/animals, that is survival: we have the knowledge of death, a state of failure. There are other abstract motivations as well, such as wanting to please others, to interact with others, to gain knowledge, etc. These motivations need to be explored, I believe, or the AI project will not go very far.
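
    The sketch promised in point 1 (a toy, hypothetical example assuming nothing about the project's actual code): tabular Q-learning shows how a single hand-coded reward can be enough, because the agent learns values for every other state-action pair from experience instead of having them all coded by hand.

      import random

      # Toy corridor world: states 0..4, reward only upon reaching state 4.
      # Only that one reward is hand-coded; all other state-action values
      # are learned from experience via the Q-learning update.
      N_STATES, ACTIONS = 5, (-1, +1)        # move left / move right
      alpha, gamma, eps = 0.5, 0.9, 0.1      # learning rate, discount, exploration
      Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

      for episode in range(200):
          s = 0
          while s != N_STATES - 1:
              # Epsilon-greedy: mostly exploit learned values, sometimes explore.
              if random.random() < eps:
                  a = random.choice(ACTIONS)
              else:
                  a = max(ACTIONS, key=lambda act: Q[(s, act)])
              s2 = min(max(s + a, 0), N_STATES - 1)
              r = 1.0 if s2 == N_STATES - 1 else 0.0   # the only hand-coded signal
              # Propagate the reward back through the state-action values.
              Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                                    - Q[(s, a)])
              s = s2

      # The greedy policy at every non-terminal state is now "move right",
      # i.e. toward the reward, even though only state 4's reward was coded.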

    Best of luck! I, for one, am always excited by new research into AI.

  55. Thanks so much, Skynet.

  56. Roko's Basilisk, anyone? :)

  57. The AI seems like a science fiction project; don't interpret this as a bad thing, as many great inventions (and great inventors) were inspired by science fiction. This could be not only the last invention of mankind, but probably its most durable: even after humanity eventually goes extinct, the AI will continue to exist and could go on exploring the universe as humans would. The AI would have life.

  58. This sounds awesome! Perhaps you could challenge your AI with Zork! It might seem random, but considering it's a (simple) text-based game, it could prove an interesting step in language comprehension.

    Best of luck!

  59. Moore's Law is headed for a cliff; we won't be seeing that kind of density improvement for much longer.

    With regard to AI, there is a huge danger in it, and I'm not talking about Skynet killing every last human on the planet. If true AI were developed successfully, it would have the innate ability to supplant almost every human function on the planet. The Earth would become a world where monetary value is no longer an attribute of anything except things like art and perhaps the very rare elements. There would be little or no monetary value in anything that today requires human activity to give it value: for example, pumping oil out of the ground, shipping it, and refining it. In an AI world, this is a matter of logistics for the AI, at no cost to anyone. Few or possibly no humans would need to be involved in any part of the process. This would lead to a radically changed world.

    Robotics took many jobs from people, requiring a workforce shift toward other areas. AI could supplant 90% or more of the workforce as we know it. The value of everything from water to nuclear power plants would plummet, as acquiring or building anything would be a simple logistics problem for an AI that has its tentacles around the world.

    This isn't necessarily a bad thing; it could be a great thing. But true AI would change the fate of the human race one way or the other, and in a very sudden and dramatic way.

  60. Will they be self-aware? I love the idea of self-conscious AI, but only if it has some moral sense, which would be difficult to achieve in a robot. A self-aware AI with any concept of morality (such as "murder is bad") would possibly be the greatest invention ever achieved.

  61. The reality of AGI: https://www.youtube.com/watch?v=lejDvUEyxXg
    It won't succeed, but you can use regular AI and algorithms to solve real-world problems (also covered in the video).

  62. http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are#t-81862

    A really good talk about AI and about what is important in the development of one.

  63. Hi Marek Rosa.
    First, congratulations on the new project!!!

    I am somewhat surprised by your interest in AGI and how you used game development in order to obtain funds for AI research.
    This was exactly the approach a friend and I tried in order to raise funds and develop an AGI with emotions (we love game development), but it did not have such a happy ending... let's say my wife did not cooperate much with the idea, and I started to fear for my life every time I was building spaceships.
    (Yes, the biggest of all coincidences: it was also a game about assembling spaceships.)

    I'm developing an AGI that shares similar concepts with your model, so I think we're heading in the right direction. In addition to having to learn everything from scratch (the AGI even needs to learn its own movements, how to get up on its own feet, etc.), it works based on basic emotions such as happiness, fear, anger, and sadness. Emotions are its tools for classifying and learning about the world. We use emotions even to choose between two different types of restaurant for lunch.

    I wanted to build a robot puppy (a soft robot with artificial muscles) and run the experiment of letting it "grow up" alongside real puppies in order to measure the development of the AGI. The problem is not being able to have much control over what the robot would learn, and not being sure whether what it learned is right or wrong.

    Good work, and I hope one day I can follow the same path. A wonderful future with endless possibilities approaches.

    Bye.

  64. I wish you the best of luck with this endeavour! Human-level AI is certainly a lofty goal, and I maintain some healthy skepticism that we'll get there any time soon, but I'm sure you'll develop many wonderful, innovative, and helpful new programs along the way.

    General-purpose AI is absolutely a huge deal, and I'm very glad to see somebody taking such a novel approach to it!

  65. Hello. I totally share your enthusiasm (I am maybe even more enthusiastic about AI), but I see one serious oversimplification at the start of your article. The growth of every biological parameter, and intelligence is merely a biological parameter, will always be halted by some other bio-parameter. In your case it could be the base speed of the processor, the volume of memory, and many other, much more complicated things that should be included in your AI model. And after some time you will have to make another AI. Or your AI will create another AI. Just as we humans have already created Science (a thinking organism in itself), Civilization, and now, possibly, AI.
    Petr. ( gangnus@volny.cz )

  66. Is it just me, or does that Pong game look like Breakout?

  67. Even the clearest and cleverest idea that an intelligent machine could give us will be subject to human approval. Unless the machine rules.
