Counter-Strike’s Dust II runs purely within a neural network on an RTX 3090 — performance is disappointing at only 10 FPS
https://www.tomshardware.com/video-games/pc-gaming/counter-strikes-dust-ii-runs-purely-within-a-neural-network-on-an-rtx-3090-performance-is-disappointing-at-only-10-fps
I like how the article asks “ever wanted to play counter strike in a neural network?”… No.
This is interesting, but mostly because it continues to demonstrate that neural networks are a pretty terrible fit for generative tasks. An “AI” program can be good at *associative* tasks (comparing images, identifying objects, that sort of thing), and at *extrapolation* within contexts where plenty of data is already available, like DLSS… but anything *creative*, or requiring *understanding* or *thought*, is simply impossible to derive mathematically. This program has ingested enough video of Counter-Strike to predict how videos of Counter-Strike will look, within certain highly restrictive parameters, but you’ll never be able to feed it enough data for it to *know anything* about Counter-Strike.
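To make the point concrete: a model of this kind learns something like p(next frame | past frames, player input) from recorded gameplay, and that’s all it learns. A toy sketch (a frequency table standing in for the neural network; the map names and inputs are made up, not from the article):

```python
from collections import Counter, defaultdict

class ToyWorldModel:
    """Predicts the next 'frame' from (frame, action) pairs seen in training."""

    def __init__(self):
        # (frame, action) -> counts of which frame followed
        self.table = defaultdict(Counter)

    def train(self, episodes):
        # episodes: list of trajectories, each a list of (frame, action) pairs
        for ep in episodes:
            for (frame, action), (next_frame, _) in zip(ep, ep[1:]):
                self.table[(frame, action)][next_frame] += 1

    def predict(self, frame, action):
        # Return the most frequent continuation; None for an unseen context.
        # This is the failure mode above: no training data, no "knowledge".
        seen = self.table.get((frame, action))
        return seen.most_common(1)[0][0] if seen else None

model = ToyWorldModel()
model.train([[("spawn", "W"), ("mid", "W"), ("b_site", "stop")]])
model.predict("spawn", "W")   # -> "mid"
model.predict("spawn", "S")   # -> None: never seen, nothing to predict
```

A real network interpolates instead of returning None, which is why the generated game smears and hallucinates when you leave the distribution of the training footage.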
From my position, it really looks like a lot of academics are very excited about a novel way to compress arbitrary types of data in an extremely lossy format. Like if you could use H.264 to encode multiple videos as one gigantic meta-file that you could navigate three-dimensionally, losing all the detail and meaning but gaining the ability to blend seamlessly from one video to another.
With better hardware and software this could be crazy. Just download an AI model trained on multiple games and play some kind of super-game where you can morph from Counter-Strike to Minecraft to Left 4 Dead. Shouldn’t be too hard to add the ability to chat with it so you can tell it to change things on the fly.