#ai-dev
I feel like I'm close; the hardest systems I have already designed and implemented successfully, a few "can't be done" things lol
Currently upgrading to work with 3d lol
Not gamedev related, but I just managed to complete a very tedious task using an agent, which would have taken me at least a couple of hours to finish:
It only took a few minutes to write the prompt and the result seems to be ok (I'm still validating it).
OpenAI says GPT 5.2 helped scientists discover something new in theoretical physics: it found that a particle interaction people thought could not happen may be possible as a special case. But the result is published as a preprint and still needs review
it's currently unconfirmed in the scientific sense
Chat is this Ai?
So uhm, I created a platform where you can configure an AI and GitHub access. From there you can chat with the AI, explore a git repo and create tasks for what you want to do. Then you can ask the AI to spin up containers wherein it will implement the actual tasks and create a PR. You can give it feedback, sometimes it can ask you questions, etc. With every PR it also automatically gives you a preview URL where you can see the app on the web. All secured per user (even the preview URLs), AIs nicely contained and safe, and no code is pushed without human review. DevOps is all done by the platform, frontend/api/databases/links/plugins/configuration/env-vars/secrets/dns, you name it, all handled (and/or configurable).
With this platform you can pretty much run a dev team.
Sounds useful. But don't many MCPs exist for similar tasks? What does your platform do differently from them? (When I searched for "github" on mcpmarket.com, it returned 5988 results.)
And how modular is it? Many dev teams mix and match different platforms/tools for Devops. So, it'd be better if it supports different platforms other than Github.
My company uses Jira + self-hosted Gitlab + Jenkins + ArgoCD + K8S on Naver Cloud. But the previous company used Jira + Bitbucket + EKS on AWS, for example.
I still think such a platform could be useful. But I'd be interested more if you can provide some information about how it's different from the rest. Maybe some Github project page for your project?
What?
I think my project isn't a good candidate for using a code agent, but I decided to give it a try.
As a first step, I'm going to ask an agent for a code review of the current codebase:
It did a fairly extensive code review, but I feel some of it was rather superficial. The conclusion it reached was more or less spot-on, however:
(Except for 2 and 4, which are due to misunderstanding of the architecture.)
Probably I'd better focus on 1 for now, and have a more in-depth discussion with the AI.
I don't know how I can standardise between Eff and IO - I don't think it's even possible. So I'll skip it for now.
As for 5, it's more or less due to my laziness, but I found this project that I can use for that purpose the other day:
https://github.com/chickensoft-games/GodotNodeInterfaces
It's more of a programming discussion than one about AI, but I think it can be an interesting case of using an AI agent for a non-trivial refactoring & code reviewing process.
The AI understood the core issue I explained perfectly, but I think it didn't get the "static constructor" idea quite as well.
It understood what I meant by a "static constructor" correctly (which was impressive, since it's not a commonly used term), but I don't think its reasoning is sound.
Still doing the code reviewing session with the AI:
What I'm trying to make is the following:
- Allow users to code with AI in the cloud with their git repo (think claude code GUI with any model in your browser)
- Provide quick previews (URL accessible builds) of PRs
- Provide environments like DEV and PRD with domains set up
- Provide a Kanban like workflow and overview of tasks
- By standardizing most of Devops through a giant complex yaml file that supports everything from VMs, docker, k8s and more. (which an AI can easily create)
- By integrating most planning tools (git, jira, etc) with webhooks to update the tasks overview
- If everything works well together, I've a product that essentially provides a small team of AI software devs that can ship a product from scratch to PRD (targeted for small companies). And if needed, a dev can jump in anywhere in the flow to have a human in the loop.
Interesting... but what benefits do you expect from using your platform instead of, say, some agent orchestration tool (e.g. Oh My OpenCode) with (MCP) tools for integrating existing infra (e.g. Jira, BitBucket, AWS, etc.)?
The benefit for you is that it's all in one place, which is only a small advantage
But the benefit for small companies, especially small non-tech companies, this can be huge
You're a software dev, if this product is trying to do most of your job while you can do it yourself, it won't be of much benefit for you
I guess some small companies building a DevOps infra from scratch might be interested in using such a one-stop solution. Still, I'd feel concerned about vendor lock-in.
But yeah, maybe it could be because I'm looking at it from a software dev's point of view, instead of from that of some non-tech oriented person searching for a carefree way of deploying their service.
Anyway, good luck with your product 🙂
Yeah, I don't vendor lock though (it would be smart, but I don't). You've access to your git at any time and can do whatever you want with it. You don't even have to use my env deployments if you just want PRs and previews.
By vendor lock-in, I meant that there are a lot of choices for each component in a DevOps infra, like AWS vs Azure vs GCP, for example.
And you can still use those
as the yaml file supports external services
The yaml file can have a record of an external service, like an AWS S3 bucket, which requires Config X and Secret Y. The platform sees this and provides config and secret options to be filled in for each env.
Well, I'll defer my judgement until I see it in action (or at least documentation) then. 😅
Fair, just hope competition won't kill me in the short run
so you're making an ai-first PaaS
doesn't Vercel have AI features though?
Vercel does, and very good ones. But on techstack it's limited to NextJS and postgres, which is very icky to me.
Something as simple as, "Make me an excel tool to automatically do this with AI" (a very common business ask right now), it would probably get stuck on what to do with the AI part or implement it very badly.
Hi 👋 ,
Repost of https://www.linkedin.com/posts/mathieun_machine-learning-roundtables-gdc26-activity-7431604766406967297-QXyj, feel free to send a connection request.
As part of the GDC2026 Machine Learning Summit, I am co-organizing two roundtables.
To make them practical, lively, and grounded in what actually shipped, we put together this small survey: https://forms.gle/jPxvfh6Xw6z6Hza26. You can contribute to shape the discussion and receive a written summary of the roundtables even if you don't attend GDC.
RT1 - AI/ML Runtime Techniques (rendering, compression/decompression, behaviour, animation, bot, …) is Tuesday, March 10, 2026 at 4:30 PM (PT). This is about what made it on screen, what survived real constraints, and what got killed along the way. Not theory. Real systems in real games.
RT2 - ML in Games Development and Operations (tools, genAI, vibe coding, GaaS, DevOps) is Thursday, March 12, 2026 at 1:50 PM (PT). This is about what actually stuck in practice over the past year, what collapsed at scale, what changed how teams ship, and what got quietly rolled back.
I tried OpenClaw yesterday and I was seriously impressed, and that was even with so many bugs and missing documentation/features that I had to fight against to get it working.
As a test-drive, I created an agent which can serve as my "friend" on Discord, with a shared interest in the dark fantasy themes I like.
I gave it a role to chat with me on this theme and build a lorebook from what we talk about. Whenever the agent finds something interesting, it creates a Markdown file for the subject, and periodically edits and reorganises the file structure to make it look like a local Wiki.
It worked like a charm, and I really enjoyed talking about the topic which was a bonus. I'll probably add things like a custom command to export the current lorebook to a PDF and post on Discord, and vibecode some tool to export it to SillyTavern's world info format.
If it can work so well for an unusual task like that, I can imagine how it can be made useful for many other more practical purposes.
I'll probably create another agent which can monitor Gitlab for MRs and notify me after doing a code summary and preliminary review.
I'd just add a review bot normally, but the new company is using a free self-hosted version of Gitlab, which is managed by a different team.
I usually do 3 or so code reviews a day, which can be time consuming when they are not trivial changes.
Added another agent and successfully made it check for new merge requests that need my approval every 30 minutes during work hours on weekdays.
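In case anyone wants to build something similar, the polling core is tiny. A minimal sketch of what my agent effectively does (the instance URL and env var names are placeholders; the endpoint and params are GitLab's documented REST API):
```python
import os
import time

import requests

GITLAB_URL = "https://gitlab.example.com"  # placeholder self-hosted instance
TOKEN = os.environ["GITLAB_TOKEN"]         # personal access token with read_api scope

def pending_merge_requests() -> list[dict]:
    """List open MRs assigned to me (GET /merge_requests, documented GitLab API)."""
    resp = requests.get(
        f"{GITLAB_URL}/api/v4/merge_requests",
        headers={"PRIVATE-TOKEN": TOKEN},
        params={"scope": "assigned_to_me", "state": "opened"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

while True:
    for mr in pending_merge_requests():
        print(f"MR !{mr['iid']}: {mr['title']} ({mr['web_url']})")
    time.sleep(30 * 60)  # every 30 minutes; the work-hours check is left out here
```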
I gave the agent a personality, so now I'm working in a roleplay setting. 😅
<@&133522354419662848> <@&1224208037892591666>
I've decided to scale down my idea of creating an AI DevOps platform to targeting small non-tech businesses
Now I've a platform where someone can ask an AI to build a website. You start a project, and it provides a basic first index.html page. You go to your website, and only for the admin (the website creator) there is an AI chat tool to talk with. It has the ability to write full html web apps, and it shows the changes right on the website for you. You can select html elements to guide it more specifically if you want. You can undo/redo/reset/save changes. The platform provides services like Auth, User&Role management, Stripe for billing, sending mails, linking your domain and storing data per user/website. The AI knows about all these features and can hook it all up itself.
That sounds more concrete and reasonable in scope to me. Good luck with your service. 🙂
I've started like this weekend
And it's already blowing my mind
of how good this works
Can't put this online yet because it's not security tight yet
Here, please don't hate too much
looks fine
Nice
Yes, wanted to ask if they look decent or not. I did try to make it consistent in other poses, but I'm still a beginner
The AI website builder was having trouble with images, like giving them a name that doesn't match the image content, which then led to wrong placements on the website.
So I gave it the ability to describe and understand any image and now it has no problems at all.
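The fix is basically one vision call per uploaded image. A rough sketch of the idea using the OpenAI SDK (model choice and the prompt are just examples, not necessarily the builder's exact setup):
```python
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_image(path: str) -> str:
    """Ask a vision-capable model to describe an image and suggest a filename."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any vision-capable model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this image in one sentence, "
                         "then suggest a short snake_case filename."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

print(describe_image("uploads/header.png"))  # placeholder path
```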
Yep, sounds like an issue I had during the making of my own AI as well 😭
but i also fixed it with the same method.
FYI...
Looks like I'll finally have an opportunity to try agent-based dev on a new project.
So far, I've only used coding agents to refactor existing projects. I'll be using this opportunity for establishing the project wide conventions/practices regarding agent use, and see how far I can automate the development in my team.
I'll likely start with an IntelliJ + OpenCode + OpenRouter setup for now.
I think this channel came from AI behavior trees and simple game agent training. Anyhow, we took it over with LLMs
Halfway done... I made a system that allows you to generate requirements from a JIRA issue and Figma design layers, and an OpenAPI specification from them. It also creates a design guideline "skill" from associated Confluence pages, so coding agents can use them to work on each project following team conventions.
Now I need to create an agent that can find differences between source Figma designs and the current page under development. I suspect it might be a challenge since the current Figma documents weren't designed to support such automation. Figma provides many such features, but our designer didn't seem to know that.
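The collection side is mostly plumbing against the documented Jira and Figma REST APIs; a minimal sketch (env var names, the ticket key and the Figma file key are placeholders):
```python
import os

import requests

JIRA_BASE = "https://example.atlassian.net"  # placeholder Jira Cloud site
JIRA_AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])
FIGMA_TOKEN = os.environ["FIGMA_TOKEN"]

def fetch_issue(key: str) -> dict:
    """Documented Jira Cloud endpoint for a single issue."""
    r = requests.get(f"{JIRA_BASE}/rest/api/2/issue/{key}", auth=JIRA_AUTH, timeout=30)
    r.raise_for_status()
    return r.json()

def fetch_figma_file(file_key: str) -> dict:
    """Documented Figma endpoint returning the file's full layer/node tree."""
    r = requests.get(f"https://api.figma.com/v1/files/{file_key}",
                     headers={"X-Figma-Token": FIGMA_TOKEN}, timeout=30)
    r.raise_for_status()
    return r.json()

issue = fetch_issue("PROJ-123")        # placeholder ticket key
design = fetch_figma_file("abcd1234")  # placeholder file key
# The issue description and the design layer tree then go to the agent
# that drafts the requirements and the OpenAPI spec.
```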
What is the purpose of the requirements and OpenAPI spec though? Shouldn't those be clear before creating the JIRA issue and Figma designs? Seems backwards ...
Usually things like an OpenAPI spec or detailed requirements come after opening a ticket (JIRA issue, in my case).
And in real projects, it's often the case the development has to begin with merely a few lines of description and a Figma document due to a pressing schedule.
As for OpenAPI, many projects generate the spec from existing code, not vice versa. They use things like Java annotations on Spring Boot controllers for that.
It's a valid case, but not desirable if you're going for a more spec-driven approach.
What I'm aiming at is keeping the most high-level artifacts like a JIRA ticket as the "source of truth", and establishing a semi-autonomous, but iterative workflow to generate concrete implementations out of them.
Anyway, I've made so far as to generate a full React/Redux Toolkit/RTK application with full unit tests that look reasonably similar to the design document.
Looks like I'll need a couple of days more to refine the process to make it usable for production.
I won't be able to do any gamedev this weekend because of it, though. 😦 I really hate when I have to work through a weekend, but at least I'll be doing something interesting this time.
But, if the JIRA tickets are the source of truth, they need to be monitored and carefully crafted to have a desired outcome
And JIRA issues from users are usually not a "good" source of truth
In my project, the tickets aren't coming from end users but they are managed internally by the team.
So then the requirements are probably present before creating the issue
Not really. Say, if we decide some new feature is needed, we just open a ticket for it. And we refine it by adding requirements or design documents, using tools like Confluence or Figma.
I believe that's the most common way of managing projects using a task system like JIRA.
Why not just put an AI inbetween or next to your ticket system
Why put the AI behind ur ticket system
To minimise the cost of manually writing specs and code?
but why write the spec after the ticket
Like code by AI sure.
As I said, tickets typically precede detailed requirements, like they do in popular methodologies like Scrum or Kanban.
I would make AI tools that help write/maintain the JIRA ticket AND the spec. And then let code AI figure it out from there.
It's only natural to think of doing something before having a detailed plan for that.
Sure, but if the ticket is the source of truth, it should also be the spec
And should co-exist and both be created by the AI
That's not incompatible with what I said. In fact, they often write specs in Confluence and link them in the JIRA issue.
But typically, tickets come first because they are often created without having a detailed implementation plan. They evolve as people spend time refining the details, and when they become concrete enough to be implemented, they are moved up the backlog, then included in an active sprint.
Yes, so when do you then use it to write spec?
Right before AI dev or just during the ticket creation/refining
In this particular case, I only had a Figma document with some comments, and a few lines of description in the ticket. For a coding agent to work properly, you need a more concrete plan than that. So, the specs must be written, and it doesn't matter if a human or AI does it.
In that workflow I'm making, there's an OpenCode command to write specs as local Markdown documents, and I instructed it to use all documents linked to the JIRA issue as sources.
It's because it'd be very inefficient and error-prone to tell an AI agent to read JIRA/Confluence/Figma every time.
You can, of course, use AI to write detailed specs on Confluence or even create Figma designs. But it'd be much better to have them as local files in a structured format as well.
Sure, the markdown artifact is useful, I won't deny that. I'm just saying, it would be even better if the artifact is generated next to the JIRA ticket, not "only" when passing the ticket to AI dev.
It's just a perspective of the UX.
Instead of User > JIRA > SPEC > Dev you would have User > JIRA + SPEC > Dev.
For now, I plan to commit those artifacts along with the prompts and configs for AI.
As I mentioned earlier, tickets usually come before specs in a vague form. It's because tickets are created whenever people come across an idea, like in a meeting. You don't usually write detailed requirement documents on the spot.
It's fine man, my point isn't coming across and I'm over it. Github is releasing this AI flow anyway so whatevs 
Workflows can differ between projects and teams. Although what I mentioned above is the most common practice.
It's not, because you see maintaining and refining a Task in the same step as Dev.
Have you worked in a Scrum, or other Agile project?
Yes I have
I've worked with JIRA, Azure Devops, hell even simple github and trello kanbans
From small to large teams
Then you must be familiar with the concept of a backlog or sprint. When you open a ticket, it goes to the bottom of the backlog by default, because it's usually vague and lacks the detailed requirements needed for implementing it.
Yes, that's the initial start of a ticket, sure
Then people progressively refine it and move it up the backlog until it becomes detailed enough to be included in a sprint plan.
Yes, we get in meetings, create designs, define features AND write the spec over the lifetime of the ticket BEFORE it goes to DEV.
Yeah, "over time" is the keyword here, that is after the ticket, not alongside it.
The ticket still exists, doesn't it?
Yeah, it does.
So it's not AFTER the ticket, it's DURING
I didn't mean to suggest we're doing something like deleting the ticket once requirements are ready.
"I didn't mean to suggest" bro, I don't know what you're saying right now.
What I understood from you is that you're making something that once a ticket is ready for DEV, the flow runs, writes the spec and does the dev with that spec.
Then I guess that was where the confusion came from. We use Confluence, where people write requirements for tickets by hand. My system isn't meant to eliminate Confluence entirely. In reality, some tickets have more detailed requirements attached to them than others, and some are entirely missing them. All I did was create an AI instruction to collect whatever is available at the time of running and compile it into something a coding agent can work with, filling the holes and reorganising the structure.
Yeah, and I'm arguing that, while it's great to have a tool to fill in the missing pieces, the tool would be more useful if it could both fill in the missing pieces before the coding agent gets it AND help someone write the spec alongside refining the ticket, before the ticket is considered ready for dev.
Instead of a tool that only fills in the gaps after the ticket has moved to dev, it would now be a tool that exists next to the process of refining a ticket.
It'd be nice if the generated documents could be automatically fed back to the source system, but I'm afraid it will complicate things too much for me. At least, my system can synchronise with the source if it got updated after the generation of the specs is done.
Well, it would be an artifact tool or whatever that hosts the documents and adds links to the docs in the ticket. I can imagine if you want to be JIRA native, you would have to write a Jira plugin, which is a headache, yeah.
Atlassian added their own AI bot recently. I'm not sure if it also helps users write documents though. Personally, I'm not entirely convinced by the idea of making the same tool serve as a writing assistant for humans and a compilation util for coding agents. They look similar, but have slightly different sets of requirements. The Markdown specs, for example, aim to be a concise instruction for agents (to save tokens and improve prompt adherence), while those for humans can be more verbose, and can contain additional images or elaborations that would only confuse AIs.
Well, we devs are moving towards a world where we'll be prompting AI the best way we can. And if you box the artifacts off to only prompt generation for the code agent, you might be missing out on stuff. Whereas as a team you could be generating these artifacts during/after meetings and reviewing them before handing them to a coding agent.
It indeed would have to be clear that these are artifacts for AI, they need to be concise, token efficient etc.
Prompting skills will be a skill/job in the future.
I do agree that it would be ideal to have AI and humans work together using the same workflow. But in the meantime, I don't need a writing assistant as much as I need a spec compilation tool, and I have only a limited time to build such a system (which is why I have to work through this weekend 😉 ).
Now it can correctly delegate to a subagent to open a page in a browser and compare it with the Figma design source to find visual discrepancies.
And it can feed the information back to the main agent so that it can ask the coder agent to fix the issues:
The back and forth between the "tester" and "coder" agent is working pretty well now:
What model are u using?
Mostly GLM5. I used Codex for complex tasks, and Qwen 3.5 for visual testing.
hello
Guys.. I'm thinking of launching an AI LLM API for $10/month unlimited usage
What do you think, will anyone wanna try it?
What's the opinion on Antigravity's agent for game dev? I made a game in just an hour without touching my keyboard whatsoever. 1 prompt in ChatGPT asking for a detailed prompt for the software. It did everything, assets to index.html
You're usually limited to static or simple non-animated 2D assets. And art style is what makes the game feel like a game, so yeah. So while Antigravity or any other AI coding tool can program a game pretty easily, it can't "make" a game.
An API is useless if it doesn't show stats of the AI model. Is it in the same ballpark with GPT 5.3 or Claude Opus 4.5? If not, you're competing with cheap open source models.
I'd consider subscribing if the service is comparable to OpenRouter, but I don't think it'd be easy to offer as wide a variety of models at similar prices as they do.
As for making games with AI agents, the issue with using them for anything related to art (which includes games) is not what it can generate but how much control you have.
AI can be a legitimate tool if you can establish a workflow that gives you enough control to steer the output to what you want in an iterative way. Otherwise, it'll just generate slops.
And it can be challenging to do that because games are difficult to exactly describe or verify by prompting alone.
It's far better, IMO, to use AI agents to enhance the existing pipeline (for which they can excel) rather than trying to "vibe code" the whole thing at one go.
yo what do you guys think about an AI that creates AIs for exactly what you ask it for
and I know there are already AIs that do that but they keep crying about laws
What do you mean by "AI that creates AIs for exactly what you ask it for"?
Maybe an example would clarify what you mean?
Probably he is talking about AI systems that generate other AI models,
like Neural Architecture Search (NAS) or AutoML systems
it generates AIs by researching by itself and creates them for you
if you haven't noticed, I like automating stuff
By "generate ais", do you mean things like training a model?
Or do you mean things like generating prompts, skills, mcps, etc.?
training a model
That certainly piqued my interest then. Can you elaborate on typical use cases you have in mind?
I just want to reach a point where I can automate everything 😁
there is no big thought behind what I'm making, just breaking through the limits of what I can do in less time
Honestly speaking, I can't find anything interesting to add, in that case. Good luck with whatever you're trying to do, however.
I've been stuck here, any tips?
I've changed the code for the vision_manager a thousand times and the same problem keeps showing
fixed it
Hey, is there a way to get the OpenAI API for free, or something else that works the same, or should I make more templates?
My bulletproof Agents file, refined over a year of work with OpenAI Codex.
Try one of the model providers. OpenRouter, for example, usually offers (often less powerful) models for free.
mysticfall thanked ttv_qsilvaq
Dang... I didn't think it'd make me "thank" the spammer 🤦 Please, delete the message as well.
<@&1224208037892591666>
has anyone been playing around with Trellis2 yet?
that looks awesome
apparently it can generate things inside transparent containers - like fish in a bowl of water
I haven't spent much time on 3D-gen stuff yet. I only used SAM once to create a mocap from a video. I already have a good 2D AI workflow, so I'm planning to use it as a basis for generating 3D models, using things like SAM, Hunyuan, or Trellis.
getting hunyuan to work locally was fucking PAIN
it's good tho
to an extent
not as in it creates useful 3d models- but if you know what you're doing, you can make them useful
I can use a Runpod instance for such tasks 🙂
in that case go with trellis... hunyuan works on consumer gpus, my 3060 runs it perfectly well
trellis needs 24gb - which is i think 4090 territory? 5090?
may as well rent a gpu then
That sounds nice 🙂 But I can easily run them on a cloud when needed.
It's not that expensive.
no? how much are we talking for let's say 2 minutes of generation time?
Let me check.
o shit
that is quite affordable
huh
i may tinker with that
prepare with an asset list, spin it up, let it run through all the generations quickly, and then not use it for a while again
This is the current price table for 24GB+ machines on Runpod.
Yeah they are pretty affordable for such a task.
might be worth considering even going serverless if all I'm doing is just quickly generating 20 3d models and then spending a day post-processing them
I have a serverless setup for a work project. But it's more suited for automated stuff, like running a render farm.
I'm running Blender in headless mode with that setup.
For generating a batch of 3D models occasionally, I think just burning credits for a spot instance would suffice.
On a side note, I'm cautiously positive about those 3D models although I haven't used them much yet. From what I've seen they seem to do a decent job at image-to-3D tasks. Combined with the 2D workflow I already have, and with help from a retopo tool, I feel I'd be able to create usable models as I intend them to be.
The only part that I'm unsure of is texturing. But I can do it myself with Substance/Blender if needed.
Yo, I got into AI 7 months ago. I was just a programmer who got into game dev, but Blender was kind of impossible for me, so I decided to make an AI that can do anything in Blender. I tried bpy and it broke every time, so I decided to make it work by seeing the screen and simulating a mouse and keyboard. Got any tips?
I am trying a lot of things in ai
Mostly trying to automate things and it's kind of easy
well sorta.
the stuff they spit out can't be really used as-is, but if you know what you're doing you can always make it work and use it as a good starting point
generally blender and ai REALLY don't get along, because the software has changed dramatically over the years, and all datasets are poisoned with outdated, wrong and unworkable information
I expect I'd have to do some retopo, but it's still better than doing it from scratch 🙂
always depends on what the endresult should be
i found 3d ai very useful when trying to achieve a ps2 style look
On a side note, what I really need is a texture generator that can create seamless, photorealistic skin maps for a given UV map, like that from MakeHuman models.
But maybe I'm approaching it wrong when we have DLSS5 now...
that's probably not happening any time soon, but skin is mostly a shader issue anyways, you can get away with almost no texture in many artstyles when making skin
I briefly explored options for making a photorealistic character before AI took off. And I was appalled by the amount of work it'd take to create texture maps like those used by the Digital Emily project.
Like you said, it wouldn't be an issue for stylised models, but when you're aiming for photorealism, it's really difficult to do it by hand.
ye it's always a question of how much you're willing to scope down.
anything photorealistic will always take an enormous amount of time, no matter how many tools you layer ontop of it
even if we said DLSS5 is perfect and makes faces look photoreal-
it'll look creepy as shit, because the animation isn't photorealistic, which leads to a massive disconnect and dives right down the uncanny valley
so now we gotta AI the animations too, which will screw with gameplay, and so on and so forth
That's why I'm interested in the prospect that AI might generate such textures some day.
Before AI, people used things like 3D scans, which only cover a diffuse map that usually has shading and highlights baked into it.
maybe in 10 years
I had briefly tested a mobile video -> animation pipeline and it worked decently.
And I ran it locally on my 4070Ti desktop.
So, I can basically "act" out any motion I need in front of my tablet camera, which I can easily convert into animations that I can import into Blender for cleaning up.
The only complaint I had was I wish there was a way I could turn them into IK animations.
I guess it's a limitation of mocap in general, but game studios use mocap all the time, so I guess it's not really needed to make good game animations.
oh sure you can do that, but there's a difference between "I can get an animation to work"
and
"my character behaves photorealistically"
as in-
there are minute little differences in his walking depending on if he wears heavy shoes or light ones.
or
when he gets into his car, all of his clothes actually rub on the fabric, and bunch up properly, as he searches for his keys in his pocket
I can imagine how things like subtle facial animations can make a night and day difference in that matter.
it comes down to miniscule details that, if they're incorrect, will look creepy if the still image has been made technically photoreal by ai
making something photorealistic is just a ridiculous amount of effort, and always has and always will
I tend to agree... but as someone who's working on a VR NSFW game, I have to set my goal high in that regard - at least I hope I won't be too picky with details when I play a lewd game 😅
you work on a nsfw game and you try to go photorealistic? 
that seems very counter productive...
You can always dream. And as for productivity, I guess that's why we're talking about it in this channel. 😉
I feel we're slowly collecting all the necessary puzzle pieces to achieve that, with things like Godot's Human Shaders, Nvidia's fork of Godot with RTX & DLSS 5 support, and so on.
The one glaring missing piece I see is textures. To be specific, a way of easily generating variants of them without having to painstakingly create them from photo scans by hand.
By the way, this was a test I did with a purchased model to see how much fidelity I could get out of it in a realtime environment.
I think that's an achievable goal if Godot, DLSS, AI, etc. keep moving on their present direction for a few more years.
On a side note, I also managed to hook that model into an Audio2Face setup, which generates life-like facial expressions & lipsync animations procedurally.
I had to abandon it because Nvidia decided to close up their platform, and I hated Python too much. I used an open-source alternative called Neurosync for my game. But somehow the project maintainer suddenly disappeared, so I'll probably have to look for an alternative again.
example codex prompt after running a codex prompt and upgrading it with gpt pro 5.4
so how is everyone
are you working on anything?
a roguelike game with 2d, isometric and 3d game modes that are interchangeable
I'm putting the finishing touches on the "spec-driven agentic workflow" that I developed using OpenCode:
I successfully tested it to create a frontend and a backend project. I'm planning to reimplement the legacy modules belonging to the project one by one using this setup.
How much RAM you need to run a model like that locally?
I'm just using Codex/OpenRouter/Z.AI etc. now, although a test to see the feasibility of running it locally is planned.
Hmm, so how reliable is it? Does it actually produce usable code or does it need fixing?
Depends. From what I've seen so far, it exceeded my expectations, but that's partly because it's optimised for certain patterns specific to my project.
The generated code is pretty reliable since I tried my best to add a lot of guardrails. The backend is a pretty simple CRUD server, but it has 198 unit tests, and AI automatically runs hundreds of static analysis checks.
After I demoed the project to other team members, one of them said the open source version of jOOQ doesn't support Oracle. So I just told it to "replace the persistence layer with a MyBatis-based implementation" and it did it without any error while I had a cup of coffee.
Later I found a few code design issues which I addressed with additional prompts, but it got the functionality perfectly fine on the first try.
Codex 5.4 is great
This is running in Chrome
custom engine from scratch using Codex
uses planck physics, polyanya for pathing, PIXI for draw calls
local detour, game loop, controls, and the ORCA/RVO2 written by Codex
oh nice
if you use codex long enough you get
one roguelike with 2d, isometric and 3d modes which can be switched around easily
i use gpt 5.4 pro to enhance codex suggestions, and then feed it back into codex 5.4 mini (and it can spit out a full 7000 LOC new feature across 40 files in about an hour) and one shot it
who made the art tho?
has someone documented their gamedev journey relying on ai ?
i saw a bunch of posts recently where people made games as a hobby for themselves or their children and they became popular
i haven't done any game dev in a while. But since the last time, there has been a huge shift in terms of AI generated code and all these agents
I wanted to know, to what extent are we relying on AI ? coz i have my reservations against completely handing over control to it
I'm seriously considering starting my project over to try an AI-driven workflow, after having had success establishing one at work. I have been very sceptical about using it for gamedev, and I still feel that way. But having so little free time and seeing what AI agents can do, I feel compelled to give it a try at least.
I expect a lot of roadblocks along the way, and I'll probably spend more time building the harness than making content at first. But if I commit to this path, I will share the workflow as an open source project.
what do you mean harness
I don't know what that means
this is like dozens of folders and a hundred files and tens of thousands of lines of code, and I did it with codex
Codex 5.4 IDE extension is really good
If you want to go beyond one-shot generating simple projects, you'll need more than just prompting.
And games have additional difficulties, especially if you want to build non-trivial 3D games using AI.
the video I showed is a from scratch engine I made using ai and I wrote almost no code
I can see that. But it's because the game is simple 😉
this is the level editor
this is the delauney triangle navmesh decomposed into convex polygons using polyanya pathing
Well, try telling it to build a VR sandbox game with MakeHuman characters with proper IK set up, for starters 😛
these are the ORCA/RVO2 algorithm reciprocal avoidance overlays to prevent ship to ship collisions
it has recoil, reticle bloom, and inaccuracy
it bakes different sized nav meshes per ship size
Let me put it into perspective for you. I tested my workflow to build two "simple" non-game projects last week at work.
They involve some 20K LOC in 200 files with 300+ unit tests.
And my plan is to apply that standard to rewrite the other 40-50 subprojects.
when you destroy ships depending on where the damage came from they undergo voronoi decomposition into physics controlled pieces
It's just that we have different ideas of what is "simple" or "complex".
lol ok buddy
good night
And even for actually simple projects, it's always good to set up some workflow which can orchestrate agents and let them incrementally discover context as needed.
Otherwise, you'll see them perform increasingly poorly as your project gets larger, and your token consumption will skyrocket.
Also, even the best models can and will make mistakes. You need a way to automatically validate their output in one way or another, which can be challenging in game projects.
My advice to you if you want to elevate from toy projects is to focus your effort on finishing one project instead of spreading thin on 50 subprojects. That is a common mistake newbies make 😉
That's sound advice. The only problem is that they are not toy projects, but a product we sold to many customers including banks and governmental departments. And I am not a newbie but someone who's been programming, likely long before you were even born. 😉
I noticed several times it took over a minute for you to only type 1-2 sentences. If your words per minute is only like 20, that can be a huge bottleneck to succeeding in your journey learning how to be a programmer.
My advice to you is you should use a typing practice site, for example https://www.typingclub.com/ 😉
you've been typing for like 3 minutes
definitely use the site
your WPM must be like 10
I'm not going to read anything you write now so don't bother
I'm leaving the channel, have a good one
I'm sorry I have hurt your ego so bad that you had to resort to such a childish attempt to insult me (which wasn't my intention).
Just 2 things: 1) English isn't my native language, and 2) it has nothing to do with AI-dev.
If you want to start a drama, at least try to do it without going offtopic.
@pale light
how is the ai better with bigger projects tho
are you actually just vibe coding stuff, or coding yourself and the ai is just kinda helping along?
Took yer jobs
The different pieces of a project represent solutions to all of the problems the project is meant to solve
So if I have a command "send a unit to point B"
then that causes the strategic planner to do A*, then produce waypoints, then another system creates maneuvers based on those waypoints
The folder structure, imports, comments, and existing code examples in context are a solid description of 'how this repo solves problem xyz'
The next time you ask the AI to make a change, the pattern of what is already there makes it far more deterministic in how it will solve the problem
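To illustrate that planner stage with a toy version (not the actual game code), A* over a grid producing waypoints:
```python
import heapq

def a_star(grid, start, goal):
    """Minimal 4-connected grid A* returning waypoints; grid[y][x] == 1 is blocked."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:  # walk parents back to produce the waypoint list
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                if g + 1 < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, node))
    return None  # goal unreachable

waypoints = a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (0, 2))
# a maneuver system would then turn these (x, y) waypoints into ship maneuvers
```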
If I ask the AI to
"make me a new ship that looks like the gunship, it has 4 turrets, they fire missiles that travel for 0.5 seconds then activate thrust, they seek their own targets, they explode when within 200px of their target, and when they detonate they create a flak explosion with shrapnel"
My game has:
-Hardpoints
-Gimbals
-Weapons
-Projectiles
-HomingCalculators
-HomingTargets
-ThrusterDelays
-InitialVelocityOnLaunch
-LaunchGates
-ProjectileFuses
-OnHitEffects
-OnDeathEffects
So creating that new ship is a matter of instantiating the set of components together in the right combination
More content is just authoring within the existing systems instead of having to write new systems
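As a sketch of what that kind of data-driven composition could look like (component names borrowed from the list above; the fields are hypothetical, not the actual game's):
```python
from dataclasses import dataclass, field

@dataclass
class Projectile:
    speed: float
    thruster_delay: float = 0.0   # ThrusterDelay: coast first, then thrust
    fuse_radius: float = 0.0      # ProjectileFuse: detonate within this range
    homing: bool = False          # HomingCalculator seeks its own target
    on_death: list = field(default_factory=list)  # OnDeathEffects

@dataclass
class Weapon:
    projectile: Projectile
    cooldown: float = 1.0

@dataclass
class Hardpoint:
    offset: tuple
    weapon: Weapon

@dataclass
class Ship:
    name: str
    hardpoints: list

missile = Projectile(speed=300, thruster_delay=0.5, fuse_radius=200,
                     homing=True, on_death=["flak_explosion", "shrapnel"])
# "a new ship that looks like the gunship, with 4 missile turrets":
missile_gunship = Ship(
    name="missile_gunship",
    hardpoints=[Hardpoint(offset=pos, weapon=Weapon(missile))
                for pos in [(-8, -4), (8, -4), (-8, 4), (8, 4)]],
)
```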
My experience has been that once I get the systems right, the combinatorics of those systems becomes easier and faster
the AI will do a decent job making that new ship and new behavior, then I open up my ship editor and see how the specific positions and angles of those weapons could not possibly have been placed well by the AI (the AI can't really see or play my game, but it can make up some initial numbers)
so then I go into the ship editor and just fine tune the angles, positions, ranges, cooldowns etc.
if you make sure each piece has a small number of responsibilities, and you ask for a change, the AI only needs to look at a small number of pieces to make changes so the context doesn't really creep upward that much
I had 3 agents working on the same repo at the same time, one working on the smoke system, one working on a custom ORCA pathing for large ships with compound discs, and another doing Tauri level authoring at the same time
the 'contracts' and tests for each piece are like promises that each piece will take certain things, and provide certain things, so each piece of the project can black box its internals and be trusted to adhere to the contract
so multiple pieces can change simultaneously, and you can refine the internals of each piece, given that the 'contract' is not broken
the shape of what is passed in and the shape of what is returned must stay the same
the quality of that result can improve as long as the 'shape' stays the same (like the waypoints being a series of x/y coordinates)
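A minimal sketch of such a 'contract' (a Protocol pinning the shape, plus a check any implementation must pass; the names are illustrative):
```python
from typing import Protocol

class Pathfinder(Protocol):
    """The contract: takes start/goal, returns waypoints as (x, y) pairs."""
    def plan(self, start: tuple, goal: tuple) -> list: ...

def check_contract(pf: Pathfinder) -> None:
    """Internals may change freely as long as this shape holds."""
    path = pf.plan((0.0, 0.0), (10.0, 5.0))
    assert path[0] == (0.0, 0.0) and path[-1] == (10.0, 5.0)
    assert all(len(p) == 2 for p in path)  # every waypoint is an x/y pair

class StraightLine:
    def plan(self, start, goal):
        return [start, goal]  # trivially satisfies the contract

check_contract(StraightLine())  # a smarter pathfinder must pass the same check
```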
so
even the context does not necessarily grow when the project is huge
is it 100% vibe coded or do you actually know what's in your code?
everything is abstractions, so if you have methods that solve the problem, the AI only needs to chain your existing methods together instead of writing the internals
I can actually see that my AI doesn't need more context
no i get how programming and how ais work, and i get how agents work too, but it's just in my experience - if something fucks up, or is not buggy, but "human buggy" - as in "the ships rotation doesn't FEEL right" - the ai can't really help you
(it says how much context it is using)
sure the ai can implement A*
no shit
I know you know how software works I'm just explaining why it seems to me that AI actually seems to get a little better once the project is bigger
but if fighting your npcs isn't fun because of the way the gun mechanics play with the movement-
how are you gonna fix that using an ai? 
I was surprised how it started fitting together and being more consistent and faster when the code was larger
the ai can't actually play your game and experience it...
I spent like an hour carefully tuning the algorithm for how the spread growth worked
I'm a QA person/playtester
I definitely have to tune everything myself
in the gif you can see my crosshair growing and the shots getting less accurate
the AI's algorithm for that was terrible
I wrote my own
okay, from the lil video you posted, you're making a space game, im a lil triangle spaceship, and i shoot other spaceships.
cool.
Now let's say you want to make an enemy that's a suicide bomber.
my bang bang face mouse algorithm was hand written too
and the AI didn't bother making it all object oriented / component based, to account for the fact you may wanna make different enemy types
is it gonna refactor everything, or just tack on another hardcoded enemy
suicide bomber enemy needs a completely different ai, but still uses the same A*
but now the goal changes every frame
which means prediction, recalculation, maybe even chunking, etc.
and i mean sure, you can tell that to the AI, and it might work-
but how do you know
it just feels like you're pulling a lever on a slot machine, hoping it spits out a winner
the ManueverSystem requires intentions and locations
it takes those intentions and locations and then outputs specific desired maneuvers like
"face left and move upwards"
the FlightController takes the ManueverSystem's output intentions and does the math to figure out, with the given ship, what the right actuation of thrusters is to cause that maneuver to happen
If I wanted to make a suicide bomber I'd create a system that produces intentions and locations and set those on the ship, which would get picked up by the maneuver system and the rest would be history
There is a system for producing explosions, shrapnel, flashes
I might give the ship a weapon that fires a projectile using the existing system that has
lifetime: 0
onDeath: explosion, friendly fire on
The suicide bomber AI system that gives its manuevers would tell it to fire its weapon when it is within X range of an enemy
There would be an event sent to the event bus that an explosion happened
the sound system listens for the event and plays a sound
the flash system listens for the event and shows the flash
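That wiring is plain pub/sub; a toy sketch of the idea (not the real engine code):
```python
from collections import defaultdict

class EventBus:
    """Toy pub/sub: systems subscribe to event names, emitters stay decoupled."""
    def __init__(self):
        self._subs = defaultdict(list)

    def on(self, event, handler):
        self._subs[event].append(handler)

    def emit(self, event, **payload):
        for handler in self._subs[event]:
            handler(**payload)

bus = EventBus()
bus.on("explosion", lambda pos, **_: print(f"sound system: boom at {pos}"))
bus.on("explosion", lambda pos, **_: print(f"flash system: flash at {pos}"))

# The bomber's zero-lifetime projectile "dies" on firing and emits the event:
bus.emit("explosion", pos=(120, 80), friendly_fire=True)
```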
but I could talk to the AI for 30 minutes with these suggestions and it would probably come up with a better design than I could
yes i know how one would make a suicide bomber
the problem is, if you've told the ai to set up your game NOT factoring in the possibility of eventual suicide bomber enemies-
no I'm saying thats what the AI would code in my game
it'd have to refactor half your ai & enemy code
and those systems are in the game already
i think you don't get what i'm saying
but i also guess that's kinda the point. People who let ai do all the things for them, don't really need to actually know what's in their codebase
and if it works - cool.
but if it doesn't - time for a new project i guess...
are you saying you think that my
Ship - Hardpoint - Gimbal - Weapon - Projectile
Component system with
Auto Aim Controllers
Direct Player Control
Firing Gates
Cooldowns
Fire Rates
You think its not modular and reusable?
will you watch the video
no, i think that your A* functions, your pathfinding components and your general enemy ai classes are probably not architecturally sound, and you may be taking on a lot of technical debt
i don't care about a random float that defines your cooldown on your weapons 
i care about the moment where you say
"alright, i wanna have 400 spaceships, fight 400 spaceships from team 2, without bunching up in the middle, and they all need to have dogfighting ai, while choosing their opponents intelligently"
THAT'S when ai is probably gonna shit the bed
because that's when it goes beyond boilerplate
I guess we'll find out in a few weeks
good luck
thats running on my laptop
(this is from 9 years ago)
If I run into issues in any particular area and the AI fails I'll just fix the code
I studied physics in college
I'm glad that I decided to start my project over to switch to an agentic/spec-driven workflow, although it meant abandoning everything I've done since last summer.
Before the change, I could only work on my game project on weekends. But now, I'm running three agents concurrently on a Monday, two for my game project, and one for my work. 🙂
what do you guys use for agent vibe coding btw? 
I'm using OpenCode. And you can see my current setup from here: https://github.com/mysticfall/alleycat
Note that it's very incomplete now, since the project is literally two days old. I'm trying to establish a similar workflow I made for my work projects.
me too 
okay so, judgement free space lol
I'm trying to make a game with 100% LLM code just to see how it does
so far I'm extremely impressed.. progress is going faster than I'd expect, 80% of my time is just writing planning documents.. the code is mostly good and works, but requires help/bending into the right direction often
do you have anything to show?
I've been working on a game using AI as well
I don't want to share it yet 
I think I have a working environment for making 2D UIs. I just started working on creating a 3D workflow:
Again, I'm happy that now I can work on my game project on weekdays. Today, I'm running 2 agents for work and one for gamedev.
It's going well so far. Made a test scenario in which the agent loads up a character and visually verifies the scene from multiple angles:
Started working on an actual feature (an IK setup for neck-spine bones):
The agent failed to use a proper framing for screenshots, however, which I need to fix.
Fixed 🙂
rts controls
ORCA vs terrain
rudimentary first AI pass
moving stations where ship AI understands how to avoid it
normal map shader support
I refined the visual workflow. And these are what the agent created with the tools I provided it to implement a neck-spine IK node:
man, I love normal mapping
never done it myself but I love looking at it
if you have a game you like and a youtube channel with gameplay videos, here's my bot https://github.com/eternalyze1/youtube_imitation_learning_game_bot
Hey 👋
<@&133522354419662848> Could you clean up the spam? Thanks!
Thanks! 🙂
mysticfall thanked hallojoby
Hey all, anybody following the DLSS 5 situation? When I heard how hard people were coming down on the tech I felt like it was a ridiculous idea that it could only make things photorealistic. I did some experiments with Comfy Cloud and ControlNet to look at what such a technology might be able to do if directed differently. I wrote an article, but just scroll through if you only want to see the images!
https://aitalesfromthefield.substack.com/p/nvidias-dlss-5-controversy
I wrote my thoughts, then read your article, and I think we're basically saying the exact same thing.
Jensen has admitted it is basically just image-to-image ai image gen (it takes an input image and a prompt, and outputs an ai generated image). There is no reason why that should be limited to forcing faces toward photoreal, but it does come with the 'training data common denominator' problem where it's going to have a mind of its own that isn't the artist's because of the training.
Prompt + Image = Output
I can take a person and say "draw them in ghibli style" and then the output can look like ghibli style but with the AI generated feel. Do that quickly and with stability over time, and you have DLSS5
I think it's exactly as good or bad as image + prompt to image AI generation is, which depends on the user, but it more often than not gravitates toward bad unless the person using it has style and sense.
If you take low poly game with good gameplay and say "make it look like photorealistic clay stop motion" and it does, that is kinda cool and can be creative
If you take a decent looking game and ask it to "make graphics better" and it looks like ai image to image pushing training data's real human faces onto games it looks sloppy and lazy and terrible
If a game is made from scratch with DLSS5 in mind, you could treat making the visuals as creating the right prompting information to intentionally let DLSS5 'finish' the look for you, and then have DLSS5 do something big and creative with it, like "make the whole world look like photorealistic yarn"
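For reference, that "Prompt + Image = Output" loop is exactly what img2img pipelines do; a minimal sketch with diffusers (the checkpoint and strength are only examples):
```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

# Any img2img-capable checkpoint works; SDXL here is just an example.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

frame = load_image("game_frame.png")  # a rendered frame standing in for the input
styled = pipe(
    prompt="photorealistic clay stop motion style",
    image=frame,
    strength=0.4,        # low strength preserves the input's structure
    guidance_scale=7.0,
).images[0]
styled.save("styled_frame.png")
```
Do that quickly and with stability over time, as you said, and you have the DLSS5 idea; a single img2img call like this has no temporal stability on its own.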
Personally, I've long believed that 3D will survive in the age of AI, but its role would shift from rendering to providing a "source-of-truth" about geometries and style guidelines.
I think we are pretty aligned. I also like your examples. Regarding style text prompting, I did give that a try, but I found that the other factors (img2img, depth maps, edge maps) constrain the output too tightly for the prompt to have much effect. That's why I went with LoRAs, as they have a deeper effect. Also note that the graphics system provides lots of buffers and reprojection, so it can be both more stable and semantically correct than a pure img2img diffusion process.
Yes, I definitely think that's a possibility. My question is how much can generated content be anchored to the world? Will clothes switch from having zippers to buttons just because you turn around? If that's an issue, can we either make the generation more predictable, or record it to keep it consistent temporally?
That's why we need the 3D model as a "source of truth". I think we'll still create zippers on clothing models in that case, but instead of spending hours on modelling zippers as a 3D mesh, simply drawing a rough shape of zippers in the texture will suffice to prevent such issues from happening.
Let me rephrase. Unless someone finds a different approach there will always be a gap between the literal geometry and the AI completed frames. This could lead to inconsistency. Perhaps the new discipline will be working out the minimum polygon/texture budget required for various things such that the inevitable differences are indistinguishable for most players.
anyone know about advanced AI for survival horror games much like that of Nun Massacre (unity, puppet combo game) or Aka Manto (unity also, chillas art game)?
i mean like AI that will hunt you down until you quit the game and probably hunt you down in the real world too
every little audio cue matters, the flick of the lighter, your feet padding on the creaky wooden floor, the door's frail wood structure screeching as it opens, the list goes on and on
ambushes, playing with its food like a cat hunting its prey and waiting for just the exact moment to strike and end your run then and there
and if it sees you, know that you are already dead. you never stood a chance. by the time that you realize your stealth has been compromised, you're already being torn to shreds by a knife forged long before your time.
Hiding is a temporary fix to an inevitable problem. Eventually, no matter how long you stay in a locker or under a bed, it will find you.
One way or another, it will weaponize your paranoia and fear against you in such a way that you spend nights trying to just comprehend.
so like anyone know how I can begin doing something like this
very passionate about this project i've been cooking up
had the idea on the back burner for a while but i figured it was time to focus on it and begin writing a story and lore
I'm not an expert per se. You could certainly have something like behavior trees for the enemy's internal state. Objects that emit signals like sounds and enemy sensors to detect those signals. Most games cheat a bit. I believe Alien Isolation semi scripts when the alien turns up and doesn't bother with pathfinding when the alien's in the vents. Maybe people have been using generative ai too? If so I'm out of that loop.
idk
not much of an AI guru either
I think the most AI's done for me is motivate me to keep playing Aka Manto
Depending on your starting point I'd suggest checking out the AI and Games channel, it discusses how the AI in a lot of popular games works. https://m.youtube.com/channel/UCov_51F0betb6hJ6Gumxg3Q
Hello
Can anyone help me understand the reason why we use bias in a single-layer perceptron (neural networks)?
I understand it's to offset from the origin
But why are we moving it away from origin?
I think I got it, as I understand it's to move it closer to the threshold, or else all values will stay around the origin and we can't simulate a real-world scenario
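That's the right intuition. A quick NumPy illustration: without a bias, the decision boundary w·x + b = 0 is forced through the origin, so a single perceptron can't even learn AND:
```python
import numpy as np

step = lambda z: (z >= 0).astype(int)  # perceptron activation: fire when z >= 0
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])

# No bias: the boundary w.x = 0 passes through the origin, so the input (0, 0)
# always gives z = 0 and fires, no matter what w is. AND is unlearnable.
w = np.array([1.0, 1.0])
print(step(X @ w))      # [1 1 1 1] -> wrong on (0, 0)

# With a bias, the boundary w.x + b = 0 can shift away from the origin.
b = -1.5
print(step(X @ w + b))  # [0 0 0 1] -> matches AND
```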
it's very early days on that stuff
but i can't see actual photorealism work without usage of ai
there's just too much detail that needs to be handcrafted, you gotta put some sort of ai in there somewhere if you wanna bridge the uncanny valley
"All of those embodied agents are seat opportunities," Jha said, envisioning organizations with more agents than humans — each effectively a user that must pay for a software license, or "seat" in industry lingo. https://www.businessinsider.com/microsoft-executive-suggests-ai-agents-buy-software-licenses-seats-2026-4?utm_source=reddit.com
what's up
ai generated web page, with ai playing soccer lol made ai with ai
Stumbled upon a video that provides a good overview of where the AI-3D workflow is at currently: https://www.youtube.com/watch?v=PyxnLyRfKFo
As I didn't see any assets of Stalin, I created a Wolfenstein 3D-esque screen for the USSR campaign
I mentioned NVidia's Lyra today on another channel, which looks nice but is proprietary. And now we have an open source alternative that we can freely use.
From the video, it looks like there's still a gap in quality between the two. But I'm glad that we at least have an open-source option for a world model now.
I vibe coded this procedural blended leg animation system by describing a bunch of algorithms to the AI
this works.
I'm not sure I know of any large bodies of work that I could even in theory have AI just go on a big loop and do autonomously for long durations right now.
I just talk to the AI like its a person and have conversations about how to get what I want accomplished, form requirements, get detailed, form an implementation plan, then have it follow it to completion.
I just briefly tested NVidia's AI animation generation model, Kimodo, and it was not really usable for me. The tech looks promising but it may need more time before becoming production ready.
Hmm, this is nice. One of the only times I'll give AI the credit for that
<@&133522354419662848> <@&1224208037892591666> spammer in multiple channels
I think that's a sensible way of developing any complex project, not just games, especially when using AI agents.
And that largely coincides with my own method, building a project using components, each having its own specification, tests, integration tests, and a test scene.
Probably the only part where I may have a different opinion is the one about distributing LOC evenly across the project.
well, it was more about how many LOC are written per turn. Each turn it was given commands to split those lines into separate files.
basically maximize your output so each run is maximum context length
I think imposing a rule on LOC might have an unexpected impact. And it's better to manage context using things like a compress/compact plugin anyway.
I think 90% of the context being used can be the thoughts the AI is having about your task. I've been able to fill a 256k context with a question about 100 LOC because the amount of research and thought required to understand and solve it was big
You could have a file 50x longer that uses less context
ai produced procedural normal mapped 2d sprite atlases workshop
so I can make rotating 2d stations/structures with doodads and details that get lit as they rotate just from code
and the smoke has a fluid simulation now
are there any ai game engines yet or?
Probably not soon
ah
Seen 1, but it's very basic
ooh?
i mean ue and unity have ai right?
@ashen plaza is making one
Panmathos is my project. Simply put, it's a middleman that learns and enforces governance and determinism, able to build and expand its own tools.
used AI quite a bit for a separate tool to bake lighting information onto sprites for my project. The results are not too bad, although there is only self-shadowing and fairly low detail shadows. But happy w/ the results regardless.
nice! I'm baking normal maps procedurally for my game too
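If it helps anyone, the classic height-map-to-normal-map bake is only a few lines (a sketch; the file names are placeholders and strength is a tweakable):
```python
import numpy as np
from PIL import Image

def bake_normal_map(height_path: str, out_path: str, strength: float = 2.0) -> None:
    """Derive a tangent-space normal map from a grayscale height map."""
    h = np.asarray(Image.open(height_path).convert("L"), dtype=np.float32) / 255.0
    dy, dx = np.gradient(h)               # per-pixel slope of the height field
    nx, ny = -dx * strength, -dy * strength
    nz = np.ones_like(h)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    rgb = ((n * 0.5 + 0.5) * 255).astype(np.uint8)  # map [-1, 1] to [0, 255]
    Image.fromarray(rgb).save(out_path)

bake_normal_map("sprite_height.png", "sprite_normal.png")
```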
trying to see if I can make good low-latency multiplayer with prediction using the WebTransport protocol that became standard in March 2026; it's brand new and seems really good
the lighting looks great.