My main work at @ArayaPress, as part of the @moonshot_IoB, is, essentially, to control robots with my mind. If that sounds like an awesome goal, I'm hoping to hire multidisciplinary researchers to build this 🧠🤖
The project is challenging but, I believe, meaningful, and it can be solved with a combination of research and engineering. Let's make science fiction science reality 🚀
Hi Neuro Twitter, what's the most impressive experiment you know to date where a human learned to send signals via non-invasive brain measurements?
Do we have an idea of what maximum intentional bit rate could be achieved given enough training?
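One standard way to put a number on this in the BCI literature is the Wolpaw information transfer rate, which converts selection accuracy into bits per minute. A quick sketch (my own illustration; the example numbers are hypothetical, not from any study):

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Wolpaw information transfer rate in bits/min for an n-target BCI.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: full log2(N) bits per pick
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# e.g. a hypothetical 4-class motor-imagery BCI, 90% accuracy, 10 picks/min
print(round(wolpaw_itr(4, 0.9, 10), 2))  # 13.73
```

The formula makes the ceiling explicit: more targets and higher accuracy both raise the rate, but slow selection times dominate for non-invasive setups.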
Eager mode was what made PyTorch successful. So why did we feel the need to depart from eager mode in PyTorch 2.0?
Answer: it's the damn hardware!
Let's tell the story of how the assumptions PyTorch was built on became untrue, and why PyTorch needed to evolve. (1/10)
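A toy sketch of the trade-off this thread is about (not PyTorch internals, just the idea): eager mode dispatches one op at a time from Python, while a compiled mode captures the whole program first, so the backend can fuse it into fewer launches for the hardware.

```python
# Toy illustration (not PyTorch internals): eager mode dispatches each op
# back to Python one at a time, while a compiled mode captures the whole
# program first, so the backend can fuse ops into one "kernel launch".

def eager_run(x, ops):
    # One launch per op: maximally flexible, but the hardware
    # never sees the whole program at once.
    launches = 0
    for op in ops:
        x = op(x)
        launches += 1
    return x, launches

def compiled_run(x, ops):
    # Capture once, then execute the fused program as a single launch.
    def fused(v):
        for op in ops:
            v = op(v)
        return v
    return fused(x), 1

ops = [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3]
print(eager_run(5, ops))     # (9, 3): same result, 3 launches
print(compiled_run(5, ops))  # (9, 1): same result, 1 fused launch
```

The punchline for modern accelerators: each launch has fixed overhead, so as ops got cheaper relative to dispatch, eager mode's per-op round trips became the bottleneck.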
Did you know that you can build a virtual machine inside ChatGPT? And that you can use this machine to create files, program, and even browse the internet?
Any nice solutions for teleoperating robots (for manipulation) using VR hardware? Hardware suggestions? Any software that makes it easier, or does everyone really have to roll their own?
Our new paper with Patrick Copinger is out 🎉
We show that a topological-Berry-phase-inspired local momentum phase, coupled to electromagnetism, gives rise to a momentum-gauge-dependent emergent spacetime.
Yesterday I had the pleasure of giving a talk at 'Consciousness Club Tokyo' on mental time travel, conscious function, and what they might contribute to future AI development. Thanks again for inviting me,
Tired of tuning your neural network optimizer? Wish there was an optimizer that just worked? We’re excited to release VeLO, the first hyperparameter-free learned optimizer that outperforms hand-designed optimizers on real-world problems: http://velo-code.github.io
for being the place to go to get pretrained models, wrapped up in simple APIs. Wanted to test out some new computer vision capabilities, and instead of spending ages figuring out people's repos, I'm up and running in a few minutes!
Neat work that shows how many RL problems (including dynamics prediction) can be turned into inference over conditioned sequence modelling tasks. Merging this with the algorithm I presented for Generalised UDRL would be even more general-purpose 🥳
You know how we train general language models by randomly masking parts of the input?
We think this makes even more sense for training a single _general sequential decision model_ that can perform behavior cloning, offline-RL, goal-conditioning & more!
http://arxiv.org/abs/2204.13326
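A toy sketch of the idea above (my own illustration, not the paper's code): a trajectory is a flat token sequence of states, actions, and returns, and choosing *which* tokens to mask selects the task the model learns to solve.

```python
import random

# One trajectory as a flat sequence of (state, action, return) tokens.
trajectory = ["s0", "a0", "r0", "s1", "a1", "r1", "s2", "a2", "r2"]

def mask(seq, predicate):
    """Replace every token where predicate(index, token) holds with <mask>."""
    return [tok if not predicate(i, tok) else "<mask>"
            for i, tok in enumerate(seq)]

# Behavior cloning: hide all actions, predict them from states/returns.
bc = mask(trajectory, lambda i, t: t.startswith("a"))

# BERT-style random masking: a generic pretraining objective.
random.seed(0)
generic = mask(trajectory, lambda i, t: random.random() < 0.3)

print(bc)  # ['s0', '<mask>', 'r0', 's1', '<mask>', 'r1', 's2', '<mask>', 'r2']
```

Other mask patterns recover other settings the tweet lists: masking future states gives dynamics prediction, masking everything between the current state and a goal state gives goal-conditioning.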
Very exciting. LLMs enable language interfaces for RL, and now code-LLMs enable symbolic policies.
Prediction: high-level embodied policies will enable self-experimentation (ideally via meta-learning) for understanding the physical world, for further improvements in capabilities.
How can robots perform a wide variety of novel tasks from natural language?
Excited to present Code as Policies: using language models to directly write robot policy code from language instructions.
See paper, colabs, blog, and demos at http://code-as-policies.github.io
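A hypothetical sketch of the "code as policies" pattern (my own toy illustration; the robot API names `move_to`/`grasp` and the instruction are invented, not from the paper): the LLM emits policy code against a small perception/action API rather than low-level actions, and that code is then executed.

```python
# Toy stand-in for a robot action API; real systems would call controllers.
log = []

def move_to(obj): log.append(f"move_to({obj})")
def grasp(obj): log.append(f"grasp({obj})")

# Instruction: "pick up the red block" -> hypothetical LLM-written policy:
generated_policy = """
move_to("red block")
grasp("red block")
"""

exec(generated_policy)
print(log)  # ['move_to(red block)', 'grasp(red block)']
```

The appeal is that the generated program can use loops, conditionals, and composition, so a single language instruction can expand into genuinely structured behavior.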
It hasn't even been a week since its release, and I have already come across a few amazing spaceships (both raw and user-refined) generated with the AI Spaceship Generator I worked on with
Don't miss our next live stream
𝐃𝐞𝐯𝐬 𝐥𝐨𝐬𝐭 𝐢𝐧 𝐬𝐩𝐚𝐜𝐞: Thursday, Oct. 27th, 5 PM UTC
Twitch: https://twitch.tv/keencommunitynetwork… YouTube: https://youtube.com/user/SpaceEngineersGame… #SpaceEngineers #Innovation #Space #Science #Sandbox #Xbox #NeedToCreate
For more details, check the paper. And if you'd like to try it out, or even build upon this research, all of the code from the Space Engineers PCG project is open source 🚀
PLEs combine methods from EAs, rec sys, pref learning, RL and ML in general. Seems like a lot, but the goal was a general, flexible framework. Make use of your favourite ML model (linear? NN? GP?), RL sampling strategy (ε-greedy? UCB?), etc.
Manually selecting items to evolve at every iteration may be slow and tiresome, so why not get the AI to do that for you too? So we speed up the mixed-initiative setting via preference learning, using a learned model to imitate human selections for some iterations.
Building upon our earlier work on a hybrid constrained optimisation EA for generating functional vehicles, and an improved version with surrogate fitness models, we now go to the interactive PCG setting, with an app for players to use to generate spaceships.
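One ingredient mentioned above, sketched in isolation (a hedged illustration with invented names, not the PLE framework's actual code): UCB-style sampling of which candidate to surface next, given a learned preference score per item and how often each has been shown.

```python
import math

def ucb_pick(scores, counts, total, c=1.0):
    """Pick the index maximizing score + exploration bonus (UCB1-style).

    scores: learned preference-model score per item
    counts: how many times each item has been shown
    total:  total number of selections so far
    """
    best_i, best_v = 0, float("-inf")
    for i, (s, n) in enumerate(zip(scores, counts)):
        # Unshown items get an infinite bonus, so they are tried first.
        bonus = float("inf") if n == 0 else c * math.sqrt(math.log(total) / n)
        if s + bonus > best_v:
            best_i, best_v = i, s + bonus
    return best_i

scores = [0.2, 0.8, 0.5]   # learned preference scores
counts = [5, 5, 0]         # item 2 never shown: explored first
print(ucb_pick(scores, counts, total=10))  # 2
```

Swapping the bonus for a coin flip gives ε-greedy instead, which is the kind of plug-and-play flexibility the tweet is describing.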
Today we're launching the Farama Foundation, a new nonprofit dedicated to open source reinforcement learning, and we're beginning by maintaining and standardizing all the major open source reinforcement learning environments. Read more here: https://farama.org/Announcing-The-Farama-Foundation…
Having worked in RL with video-game testbeds and now moving to robotics and neuroscience domains, where domain knowledge was previously key, it feels like collecting more data is the way things are going. It trumps everything else: more data + simple objective + good model architecture.
Open problems in RL research thread (infrastructure, methodological, philosophical, ...).
Why does it feel like the field is expanding horizontally (more research, more general progress) without breakthroughs in academia (we're still using the same 3+ year old algorithms)?
Add some ⬇️
Just discovered mamba as a faster drop-in replacement for conda, and the improved dependency resolution and parallel package downloads are already saving me a lot of time with complex envs 🐍💘
In IGT, consciousness sits on top of a generative model. This could be episodic memory, or more broadly, what the authors call the "conscious memory system".
talked of episodic memory as more of a generative model than a memory store, and this paper runs quite strongly on this perspective of episodic memory.
Interesting read: a relatively broad theory of consciousness that tries to explain many of the phenomena and possibly unintuitive findings from a vast swathe of consciousness research. The core claim is that consciousness evolved as a way to use episodic memory.
AI day is a perfect example of Deep Learning at its finest. Mix and match all the greatest innovations to do something drastic and super ambitious. Congrats!