MiniNeoRL

Hello,

I have recently made a small Python port of my GPU NeoRL library. It doesn't have the same feature set, so the most important differences are listed here:

  • It is fully connected (not sparsely connected)
  • It uses a new method for organizing temporal data (predictive coding; a minimal sketch follows this list)
  • It is slower
  • It is much easier to understand!
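
For readers who have not seen predictive coding before, here is a minimal numpy sketch of the general idea: a fully connected layer tries to predict its next input from its hidden state and learns from the prediction error. The class and update rule below are a hypothetical illustration of the concept, not MiniNeoRL's actual API.

```python
import numpy as np

class PredictiveLayer:
    """Toy fully connected predictive-coding layer (illustrative only)."""

    def __init__(self, input_size, hidden_size, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w_enc = rng.normal(0.0, 0.1, (hidden_size, input_size))   # error -> hidden
        self.w_pred = rng.normal(0.0, 0.1, (input_size, hidden_size))  # hidden -> prediction
        self.hidden = np.zeros(hidden_size)
        self.prediction = np.zeros(input_size)
        self.lr = lr

    def step(self, x):
        # How much of the current input did the previous prediction miss?
        error = x - self.prediction
        # Delta-rule update of the predictor, using the hidden state
        # that actually produced the (wrong) prediction.
        self.w_pred += self.lr * np.outer(error, self.hidden)
        # Encode only the surprising part of the input.
        self.hidden = np.tanh(self.w_enc @ error)
        # Predict the next input from the new hidden state.
        self.prediction = self.w_pred @ self.hidden
        return self.hidden

# Usage: feed a sine wave one sample at a time.
layer = PredictiveLayer(input_size=1, hidden_size=16)
for t in range(500):
    layer.step(np.array([np.sin(0.1 * t)]))
```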

I call this port MiniNeoRL, since it serves mostly to help me prototype new algorithms and to explain them to others. Now, I am not exactly a Python expert, but I think the code is simple enough that, with some explanation, it should be easy to understand.

Along with MiniNeoRL, I have made this slideshow, which serves as a brief overview of what NeoRL is and how it works:

NeoRL_presentation

Until next time!

2 thoughts on “MiniNeoRL”

  1. Hello again. The first part of the GVGAI port has been done for a while, but I've decided that it is time I go into poker for the time being rather than try testing NNs on real-time games.

    I had intended to test (Mini)NeoRL once I was done with GVGAI-Fsharp, but it seems that will take a while. Time is money and duty calls.

    Out of laziness, there is something I would like to request regarding NeoRL: test it on pixel-by-pixel MNIST. On that problem, LSTMs can achieve nearly 99% accuracy, which is quite remarkable.

    I know you wrote that NeoRL is not a classifier, but human intuition for gauging the optimization power of an algorithm is pretty useless, and the few demos you have shown have been inconclusive. Having hard numbers is an absolute necessity to know where one stands.

    Real-time Atari games and such are quite difficult, not just for the net but for the programmer as well. It is partly due to my inexperience as a programmer, but it took me well over a month to port a chunk of the library I was working on. Quite a significant expenditure of time.

    In contrast, setting up a pixel-by-pixel MNIST experiment would be trivial for you. I would be curious to know what the result would turn out to be.
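
    For concreteness, here is a rough numpy sketch of how a pixel-by-pixel MNIST run is usually set up: each 28x28 image is flattened into a 784-step sequence and fed to a recurrent model one pixel per timestep, with the class read out after the last pixel. The model interface (reset/step/classify/learn) is hypothetical, and the placeholder data stands in for a real MNIST loader.

```python
import numpy as np

# Placeholder data standing in for MNIST; swap in a real loader.
rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))       # 100 fake 28x28 "digits"
labels = rng.integers(0, 10, size=100)   # fake class labels 0-9

def run_pixel_by_pixel(model, images, labels):
    """Present each image as a 784-step sequence, one pixel per step."""
    correct = 0
    for image, label in zip(images, labels):
        model.reset()                      # clear recurrent state
        for pixel in image.reshape(-1):    # row-major scan: 784 steps
            model.step(np.array([pixel]))  # one scalar input per step
        prediction = model.classify()      # read out a digit 0-9
        model.learn(label)                 # supervised signal at sequence end
        correct += int(prediction == label)
    return correct / len(labels)

# `model` must implement reset/step/classify/learn, a hypothetical
# interface rather than MiniNeoRL's. For example:
# accuracy = run_pixel_by_pixel(model, images, labels)
```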

    Regarding recurrent nets, LSTMs are a bit of an aberration of nature, given that they work very well and yet are not sparse. Off the top of my head, in the feedforward case, convolutional multiplies, ReLUs, maxout, and WTA each significantly ease the optimization burden compared to non-sparse sigmoid and tanh activations with fully connected multiplication.
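
    To make the sparsity point concrete, here is a small numpy sketch of a k-winners-take-all (kWTA) activation, one generic form of the WTA idea mentioned above: only the k largest pre-activations survive, and the rest are zeroed. This is a textbook illustration, not NeoRL's particular sparsification.

```python
import numpy as np

def kwta(pre_activations, k):
    """k-winners-take-all: keep the k largest values, zero the rest."""
    out = np.zeros_like(pre_activations)
    winners = np.argpartition(pre_activations, -k)[-k:]  # indices of the top k
    out[winners] = pre_activations[winners]
    return out

# Dense tanh activates every unit; kWTA leaves only a few carrying signal.
x = np.random.default_rng(0).normal(size=10)
print(np.tanh(x))  # dense: 10 nonzero activations
print(kwta(x, 3))  # sparse: 3 nonzero activations
```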

    I would bet there is a way to do sparse recurrent nets properly, though I have no idea what combination of constraints and activations would be necessary for it.

    Due to ignorance, I can really only go with what I have.

    Back to the issue at hand: compared to poker (and board games), pixel-by-pixel MNIST would be a trivial problem to set up. Please do so. Thanks.
