#ai-dev
2465 messages · Page 2 of 3
I made an evolution simulator
This is the evolution graph (Green is herbivore and red is carnivore)
The numbers on the top are the final herbivore and carnivore amounts
Is this good, anyone?
(I stopped the simulation once the ecosystem was stable)
Hi everyone, I've read that decision trees are a natural fit for tabular data, and, in fact, they currently seem to outperform neural networks on that type of data (as opposed to images).
So I want to use them, but when I actually test decision trees on the very basic sklearn tabular dataset load_diabetes, I find that the model is pretty much garbage.
So I wonder, did I do something wrong, or is DecisionTreeRegressor just very bad?
UPDATE : I tried to use the Tree on my own data, and it worked quite well 😊
So I guess it's just the sklearn diabetes data that is bad
(better than lasso or ridge at degree <3, but worse than ridge at degree 3)
@balmy patrol All that comes to mind is that it takes less effort for CPUs to search a binary tree than to run a sample set through a model.
However, I have little to zero training in ai development, so that's the limit of my imagination. I wouldn't be surprised if my statement is false.
I can believe that for classification, but for trees applied to regression, I'm not 100% sure since linear regression consists of just a few multiplications and additions. I'll do some research !
I couldn't find a lot of information on the Internet, so I tested it directly with the models I made and my training set (about 1000 rows).
The tree is indeed way faster to make its prediction than the polynomial models ! (With a slightly reduced accuracy though)
Although ridge and lasso only involve one matrix dot product, there are 42 features to multiply in my case. I read that with many more features (more than a few hundred), linear regression is significantly faster than decision trees, provided all of the features have low interaction.
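For intuition, prediction with a fitted linear/ridge/lasso model really is just one matrix product; here is a sketch with made-up weights (not the actual model from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)
coef = rng.normal(size=42)        # one learned weight per feature (42, as above)
intercept = 0.5
X = rng.normal(size=(1000, 42))   # a batch of rows to predict

# Linear, ridge, and lasso models all predict with a single mat-vec product
y_pred = X @ coef + intercept
```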
do you know why ridge @ 4ms has such a low count?
At each run, I randomly split the dataset into 30% for testing and 70% for training, which explains why there is so much variance in the results.
About the surprisingly low results, I didn't print the coefficients, but I think it has to do with the coefficient matrix's sparsity.
Which also explains why Lasso regularisation is faster than Ridge: those regularisation algorithms tend to bring some coefficients down to 0, which significantly reduces the computation time.
I often have a lot of coefficients around 10^-5, and sometimes they reach a low enough level to be considered negligible and set to 0. I think that's what happened here!
What enables a regularization algorithm to bring a coefficient down to 0?
Basically, regularisation applies a penalty to coefficients when they are too high, to avoid over-fitting. Smaller coefficients are favoured.
On the graph below, β1 and β2 are the parameters, and the optimal solution is β̂. But because of the regularisation, it cannot be chosen as the final solution, because it involves values of β1 and β2 that are too high. The red ellipses represent good, but less optimal, solutions. The intersection between the blue zone (coefficients authorised by the regularisation) and the first red ellipse it touches will be the chosen solution.
Lasso is on the left, and will tend to bring coefficients down to zero because it authorises a higher value of β1 if β2 is at zero.
Ridge, on the right, is more permissive, resulting in less sparsity but more precision.
What's to consider is that often, one parameter is way more significant than another. In my case, X1 has an effect of about 10⁶ and X1X2X4 has an effect of about 10. If the regularisation only authorises values below 1, for instance, it will choose a coefficient with an effect of about 1 for X1 and about 10^-5 for X1X2X4. Implementations of these algorithms consider an effect of 10^-5 or 10^-6 to be zero, to favour sparsity, so the very low coefficients are just set to 0.
Therefore, in my case, X1X2X4 has been set to zero.
It's actually a bit more complicated, as the regularisation is not binary (like "1.1 is not okay, 1 is okay") but continuous (like "1 is better than 1.1"), but the principle is the same!
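A way to see the zeroing concretely: a sketch using the closed-form per-coefficient updates you get when features are orthonormal (not code from this thread, just the textbook formulas):

```python
import numpy as np

def lasso_shrink(b, lam):
    """Soft-thresholding: the lasso solution for one coefficient when
    features are orthonormal. Anything with |b| <= lam becomes exactly 0."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def ridge_shrink(b, lam):
    """Ridge only rescales: coefficients shrink toward 0 but never hit it."""
    return b / (1.0 + lam)

b = np.array([2.0, 0.3, -0.05, 1e-5])
print(lasso_shrink(b, 0.1))   # the two tiny coefficients become exactly 0
print(ridge_shrink(b, 0.1))   # every coefficient just gets a bit smaller
```

This is exactly the behaviour described above: lasso's penalty has a hard threshold that snaps small effects to zero, while ridge's only divides them down.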
Thank you. A lot of words and new vocabulary for me, but what I took away was that ridge allows more matching points along β2 and lasso has more along β1. And instead of the algorithm making a coefficient 10^-5, it'll make it 0 in this case. That's all my brain can handle, thank you
afroradiohead thanked Owen
You're welcome 😊 it was actually good for me too, explaining something is a very good way to learn, I could strengthen my knowledge by coming up with this explanation.
If you want to learn Machine Learning and have some background in mathematics (analysis and linear algebra), I highly recommend the course by Andrew Ng available on Coursera, which is free. He explains the mathematical foundations behind those algorithms, which is very useful to know later down the line.
working on context based steering
found an issue where if the object is in a corner then there's a possibility for the interest scores to be 0 for everything
turns out the solution was simple
just don't clamp the score very epic
can't wait to see what issue that causes
@hushed bone I’m curious if you have a visual that represents your findings.
how would i detect when an object is traveling towards me
With the object's direction vector, you'll have to check whether extending it far enough will connect with you.
Is your world subject to gravity?
no
eg
frame 0:
obj pos (0, 1, 0)
player pos (0, 0, 0)
frame 1:
player tick
obj tick
obj pos (0, 0.1, 0)
frame 2:
player tick
obj tick
obj pos (0, 0.2, 0)
frame 3:
player tick
obj tick
obj pos (0, 0.3, 0)
etc
check if the object's velocity is facing you
use the dot product
you'll have to do a raycast to see if it will actually hit you
(line intersection test)
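The dot-product check plus a cheap ray-vs-sphere test might look like this (a sketch with hypothetical names, treating the player as a sphere):

```python
import numpy as np

def is_approaching(obj_pos, obj_vel, player_pos):
    """True if the object's velocity points toward the player at all."""
    to_player = player_pos - obj_pos
    return float(np.dot(obj_vel, to_player)) > 0.0

def will_hit(obj_pos, obj_vel, player_pos, player_radius):
    """Cheap ray-vs-sphere test: does the ray along obj_vel pass within
    player_radius of the player?"""
    speed = np.linalg.norm(obj_vel)
    if speed == 0.0:
        return False
    direction = obj_vel / speed
    to_player = player_pos - obj_pos
    t = float(np.dot(to_player, direction))   # distance along the ray to closest approach
    if t < 0.0:
        return False                          # closest approach is behind the object
    closest = obj_pos + t * direction
    return float(np.linalg.norm(player_pos - closest)) <= player_radius
```

This avoids a general-purpose physics raycast entirely; it's just a couple of dot products per object.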
Aren't raycasts expensive?
a general purpose one (is slightly)
if you're just doing a line intersection test between two spheres, no
don't do something like physics.raycast (unity)
alright
would it be better to raycast or predict movement?
eg, an object may move in any direction, while also facing any direction
?
btw if you want to do target leading, that requires a bunch of math
somebody can help me with an ai for chess game?
look up AB pruning
that will help you find some examples
Yeaah, I have seen that algorithm also minmax but I don't understand a lot
i recommend some youtube vids, unfortunately the stuff isn't trivial
i think you could use steering behaviors
Looked through pretty much all the documentation online and tested out some code, but nada
Been working on achieving this type of spline path for AI movement for a while. So far, in my free time, I've researched splines and done some R&D sessions to achieve what I want. Due to the lack of information on the web, I'm wondering if I'm on the right track or not.
In the video, they constrain the spline points to navmesh polygons, which I also do (the only difference is I work on a pre-created path with A*), but so far I couldn't achieve the same result as the video. I wonder if they re-iterate over all the spline points and re-set the tangents, or just try to find a perfect tangent value for the last spline point... or is there any fancy math formula that can solve everything without much hassle like this?
Any input or further keywords to research are appreciated, since I really couldn't find a direct answer to what I'm looking for anywhere.
So far (in Unreal Engine) I played with the spline object for a while, tried to observe how my desired shape can be formed, noted the values of tangents based on the shapes I created, etc., and decided to gather info about the current state of the spline by factoring in the previous and next spline points (their locations, angles between each other, etc.) in a loop while creating spline points on top of the created A* path points, then apply some hardcoded values based on the values I have... but not sure if I'm on the right track
is there a good reference or tutorial for RPG boss AI? someone familiar with this? doing boss phases and an AI state machine
well i would first start with the behaviors you want
you'll probably want some form of sensory input to determine which action to take; one of those inputs can be health
ie if the player is firing rockets, be defensive
but if health is low, shoot more missiles
etc
Trying to make an AI system good enough for an FPS game where they'll work basically like other players in the match, but I don't think I can make them smart or strong enough
AI is hard
Yeah I literally spent a year trying to make good AI and the game got a 5/10 on steam, and all the complaints were about how buggy and just awful the AI was
So before I make another game just wanna go over that
see if theres any good books on ai in games
what techniques did you use?
i recommend looking at GOAP
check game ai pro 360
most common one around the industry
@tawdry yew this is my goto resource for curves. That gdc video appears to show the application of poly-bezier using nav mesh vertices as anchor points and then setting the control points on either side so they remain in the nav mesh poly and have the same slope
I think your strategy of starting with the nav mesh A* path is the correct one, each vertex in the path will be an anchor point for your curve, you then just need to pick control points leading in and out from each point
I would probably use Hermite or Catmull-Rom
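For reference, a Catmull-Rom segment between p1 and p2 can be evaluated like this (a generic sketch, not Unreal's implementation). The implied tangent at each interior point is 0.5 * (next - previous), which is exactly the automatic tangent-setting being discussed:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment from p1 to p2, for t in [0, 1].
    The tangent at p1 is 0.5 * (p2 - p0), and at p2 is 0.5 * (p3 - p1)."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

# The curve passes through p1 at t=0 and p2 at t=1, so feeding it
# consecutive A* path points keeps the smoothed path on the waypoints.
pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (2, 1), (3, 1)]]
mid = catmull_rom(*pts, 0.5)
```

Because the tangents come automatically from neighbouring points, you don't have to hand-tune them per point, which is why it keeps getting recommended for this use case.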
Trying to train a custom pix2pix implementation, but the outputs have a checkerboard pattern with gaps. Is there something wrong with my implementation, or is this a matter of an immature model and I should give it more training time?
yeah that looks like a bug
ah yep and think I figured out why. I'm calling expand_dims to fit the dataset which is probably filling it with empty values or something. now I gotta figure out how to fix it
turns out I was doing a whole bunch of things wrong.
got it working though, big pogs
What framework are you using for this implementation if any?
Writing it in Python used this https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py as a template
after ~500 epochs, diffuse to height. I might need to prune a lot of lower-quality textures or let it train much longer. Quality does seem to regress, which could be due to that.
Hey, I'm so sorry for replying late. I guess somehow I missed the notification. Thanks for the link, it helped me clarify many things in my mind.
That gdc video appears to show the application of poly-bezier using nav mesh vertices as anchor points and then setting the control points on either side so they remain in the nav mesh poly and have the same slope
you then just need to pick control points leading in and out from each point
can you please elaborate on these two sentences more? Is this something I can find in the link to learn more about?
I've searched through Unreal Engine's source code and I found a Catmull-Rom implementation. Will try to set tangents based on that algorithm to see if it gives more insight or not
Am I understanding right that a Catmull-Rom implementation will provide what I'm looking for regarding setting tangents, by the way? It's been recommended to me multiple times, especially in this server, but I guess I missed (didn't ask about) the point of it. So far in my reading it mostly looked like just a way to make spline points smoother, rather than specifically adjusting tangents based on a desired shape
i'm not sure
what a funny concept: "Instead of making a game, I'm going to make something more complicated to do it for me." ai dev is weird
where the fun is

using catmull rom did it
I didn't get exactly what I wanted, but the result is 1:1 the same
i just need to find a way to make the system avoid creating weird curves in edge cases like this
is it possible to make an ai that pee pee poo poos?
What's that?
Hi,
I am considering a game where you help a group of stranded NPC survivors, but I am very uncertain about which AI setup I should go with.
What would be easiest to make a group of NPCs do the following:
- Find food/water
- Sleep/Eat/Drink when needed
- Working together to get the camp working
- Explore the island
- Build a large bonfire to get noticed by ships/planes
that is quite a complex AI system
I recommend looking into Goal oriented action planning (GOAP)
@patent blade yes, and maybe it is too much, but now my mind is fixated on that idea. GOAP sounds like a possibility; I will read up on it and see if it is too complex for me.
Thank you for the suggestion :D
PovlEller thanked CobaltHex
Ask the RDR2 horse devs
Does anyone have good resources to learn Behavior Trees at an advanced level? It seems most things are very beginner.
I recommend taking a look at something like GOAP
and I recommend breaking down objectives into small modular parts that you can build complex behaviors out of
e.g. if you want an AI to go to the shop and buy something, that would be movement, opening doors/etc, purchasing action, etc
I’ve been looking into GOAP and on paper it sounds amazing. Based on what I’ve read/been told, it’s much more difficult to debug than a standard BT. I might give it a go though just to test it out. Thanks for the response!
Miinoy thanked CobaltHex
Difficult in what sense
Hello everyone! Kind of an odd request, but my name is Brianna and I'm an MBA student at UCLA Anderson conducting research for my master's thesis. I am looking to interview independent game developers about their current development practices, use of asset stores in Unity and Unreal, and what types of technological gaps currently exist for you.
This is a market research project, our team is helping an external client with advanced machine learning models figure out where there may be a market opportunity to create a product/set of AI assets for games. One major part of this thesis is primary research and conducting interviews, which is where you come in! I thought that this channel may be the best for people working with AI.
If you'd be willing to have a 30 minute call with me in the next few weeks to talk through some of these questions, please DM, reply here, or email me at brianna.krejci.2023@anderson.ucla.edu so I can reach out to you. Thank you so much!
Ai-dev???
Hello, we are 3 students studying UX/UI (User Experience & Interface) Design and are working with a company that makes a visual no-code prototyping tool for game narrative (they allow a user to rapidly put together and test a story, then export the code). Their goal is to help build better stories and reduce the time it takes, as well as the barrier to entry, for someone to build a new story-driven game. We are therefore looking for people that are interested in gaming and/or want to develop their own games. https://forms.gle/Tifd4SYVKx23BcBT6
Related to GOAP; I suggest taking a look at this article that details an approach that replaces action planning with sequential composition to select actions. If you can wrap your head around working back from the desired state it’s very debug-able and expandable https://www.gamedeveloper.com/programming/how-we-developed-robust-ai-for-dwarfcorp


Hi Everybody,
I am working with GOAP and already got some good progress with it.
One feature I would like to get into my game is that NPCs try to be at a safe location at night. Currently I cannot see how to get that into GOAP or any other system without hardwiring another system that looks at how long until nightfall and then forces the NPC to head back in time to reach safety. Which would lead to an NPC trying to get somewhere in the wilderness, getting almost there, and then turning back, day after day...
Any good ideas to whether GOAP is up for that task or how to approach it?
definitely armchair experting here but a thing my friend was doing was pairing utility AI and GOAP together
basically utility AI would select which planners were valid, weight them
then GOAP carries out the plan, reporting back to the utility AI with results
if the plan failed, the utility AI would carry out the next best plan and attempt it
if you mean "trying to get somewhere in the wilderness", do you mean getting halfway to another safe location and then turning around when night falls?
or do you mean getting close to a location where they need to gather something for example and then not completing the action because night is falling and they need to get back to safety?
if you're trying to get from one safe zone to another maybe set a boolean when night falls, then check and see the distances between the NPC and the two safe zones, then determine which is closer and path towards that
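That closer-zone check is a one-liner; a sketch with hypothetical names:

```python
import math

def nearest_safe_zone(npc_pos, safe_zones):
    """When the nightfall flag is set, path toward whichever safe zone is closest."""
    return min(safe_zones, key=lambda zone: math.dist(npc_pos, zone))

# e.g. an NPC at the origin choosing between two known safe zones
target = nearest_safe_zone((0, 0), [(10, 0), (3, 4)])
```

In a real game you'd probably use path distance (A* cost) rather than straight-line distance, since the nearer zone as the crow flies may be the farther one to walk.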
In general, it would be both.
But I think I could get by with some more complicated action for moving from one safe location to another safe location.
So the main problem is Investigating some area.
I am considering having cost closely resemble time and, when planning, limiting total cost to however much is left of the day. But it seems hand-wavy to me
has anyone else been thinking about generative ai and the future of game dev? curious to hear everyone's thoughts on the future
if you're talking from an art perspective it'll be helpful for indie devs
mainly for low poly/res stuff
concept art and getting images to communicate with other devs as well
other than that kinda remains to be seen
gpt3 chat can be helpful to rubber-duck a problem against, but for me so far it hasn't been able to do that much; maybe my prompts need to be more specific
hi i am looking for dev's for my rust clan server and im looking for developers for custom maps and i was wondering how much i wil have to pay u for it
this is not the channel to ask that
The main bottlenecks stopping me from using these today are the compute requirements, mainly taxing the GPU, which tends to already be overtaxed in games, and the fact that the infrastructure most machine learning happens on is mostly Python-based; moving these networks tends to require a lot of technical skill even if you're going from, say, PyTorch to (C++) Torch. It's criminally underutilized for editor tooling, but I think in games it's still quite hard to make it work.
Does anyone know how to speedup multivariate_normal.pdf from scipy.stats? Or if there is some C/C++ implementation that can be used in python?
The multivariate_normal.pdf function from scipy.stats is implemented in Python, so you can't drop in a C/C++ implementation to speed it up directly. However, there are a few things you can try to speed up the function:
Use NumPy's vectorized operations: since multivariate_normal.pdf can calculate the probability density function (PDF) for multiple points at once, pass all your points in one call instead of looping.
Reuse a frozen distribution: creating multivariate_normal(mean, cov) once and calling its .pdf method repeatedly avoids re-processing the covariance matrix on every call.
Use Just-In-Time (JIT) compilation: you can use a JIT compiler like Numba to compile numerical Python code to machine code, which can speed up the calculations.
Here is an example of how you can use Numba to speed up the multivariate_normal.pdf function:
```python
import numpy as np
from scipy.stats import multivariate_normal
from numba import jit

# Define the multivariate normal PDF function using Numba's jit decorator.
# Note: Numba cannot compile the scipy call itself, so forceobj=True is
# needed just to make this run -- it won't actually speed this call up.
@jit(forceobj=True)
def mvn_pdf(x, mean, cov):
    return multivariate_normal.pdf(x, mean, cov)

# Generate some random data
mean = np.random.randn(2)
A = np.random.randn(2, 2)
cov = A @ A.T + np.eye(2)  # a valid (symmetric positive definite) covariance
x = np.random.randn(100, 2)

# Calculate the PDF for all 100 points in one call
pdf = mvn_pdf(x, mean, cov)
```
I hope this helps! Let me know if you have any other questions.
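For what it's worth, the vectorization suggestion can also be done by hand: the multivariate normal PDF is simple enough to write directly in NumPy, evaluating all points in one shot with no scipy in the hot loop. A sketch, assuming cov is symmetric positive definite:

```python
import numpy as np

def mvn_pdf_vectorized(x, mean, cov):
    """Multivariate normal PDF for many points at once.
    x: (n, d) array of points, mean: (d,), cov: (d, d) SPD matrix."""
    d = mean.shape[0]
    diff = x - mean                                   # (n, d)
    cov_inv = np.linalg.inv(cov)
    # Squared Mahalanobis distance for every row in a single einsum
    maha = np.einsum('ni,ij,nj->n', diff, cov_inv, diff)
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * maha) / norm
```

For repeated calls with the same cov, you could also precompute cov_inv and norm once outside the function.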
Thank you very much for your answer! I will try it
Sebastian thanked Mastercord
Does anyone in the community have any recommended open source AI projects to get that are still available?
I know ChatGPT has a few open source community driven competitors, and I wanted to get them while I still can.
Additionally, does anyone have any recommended open source AI image/video upres programs they use?
I really want to try and preserve (and hopefully contribute) to what's out there currently. I know there has been a large movement to keep open source AI projects alive while these private groups gain momentum.
heyy
I made an ai discord bot
wanna test it anyone?
@ai
anyone have tips on making AI for a large persistent open world? I'm looking specifically for simple ways to design a system that allows AI agents to run well both close to the player and far away from the player
don't update AI that's not near the player
in the case of an RPG, you can build scheduling systems that can predict where a character will be at a certain time of the day
You could also steal from Minecraft and only spawn enemies within a certain radius of the player
Despawning too
Has anyone used deep Q learning before that could explain how you're meant to find the Q value of the next state?
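For anyone with the same question, this is the usual Bellman-target recipe from standard DQN (not anything specific to this thread): the next state's Q values come from a separate target network, and the "value" of the next state is the max over actions, zeroed for terminal states. A sketch:

```python
import numpy as np

def q_targets(rewards, next_q_values, dones, gamma=0.99):
    """Bellman targets for a DQN batch.
    next_q_values: target-network outputs for each next state s',
    shape (batch, n_actions); dones: 1.0 where s' is terminal."""
    # Value of the next state = Q of its best action, zeroed on terminal states
    next_v = next_q_values.max(axis=1) * (1.0 - dones)
    return rewards + gamma * next_v
```

The online network is then regressed toward these targets, and the target network's weights are periodically copied from the online one.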
well yes you do that, and when they spawn you calculate where/what they should be doing
Really love that tutorial - highly recommended!
not everything needs machine learning though
you can do all sorts of cool things with just tree searching
Does anyone know how to build a walkable mesh from a txt file (example map in map.txt, ref: https://drive.google.com/file/d/1Srzcsc3BOi7YqUD0Bu02Mm9Ph2uSdAU0/view?usp=sharing — '.' is a free cell, 'B' a well, '0' an empty cell) or from an array of gameObjects, using only C++ (without external libraries)? I want to create a polygon which I will triangulate later. I'm using a wall-follow algorithm, but the map generator could also create a map with some holes, or I might make an RTS game like Stronghold (2001) or Dune II (1992). How should I create a navigation mesh in this case?
I would ask this in #misc-dev
please dont just post random links
ai mb
Ok, ty
it does seem like ML is becoming an end in itself
oops
didnt scroll down
Anyone know of some good courses I could take to learn the AI stuff as a beginner.
game AI or ML AI ?
Both.
for game AI i recommend looking up the talk on GOAP
it's either on GDCVault or youtube (or both)
and for ML AI there's lots of videos on youtube, I do like Sebastian Lague's (iirc that's the name)
Thx
Hey guys! Wanted to share my new devlog on my Unity game where I am training an AI to learn to dodge bullets using ML-Agents/Reinforcement Learning! I am kinda surprised myself how good it plays after proper training... If anyone is interested in this space, do check out and let me know what you think!
(smaller)
jejeje
Hi guys!
I have released the demo of ChatGPT-powered game "The Riddler" today. Check it out on Steam: https://store.steampowered.com/app/2348030/The_Riddler/
I would be grateful for any feedback you can provide.
ai need control hard ware parts
Hey guys, I have a quick question. I've been following CodeMonkey's tutorial to set up my first ML-Agents project. I've been following (I think) the exact same steps. But when he starts his first training, you can see the ML agent walk around trying to find its way to the end goal (which is the yellow ball). When I start my ML agent it does weird things, like quick jumps, even though it's not supposed to move in the y direction. And it doesn't really move around, it just makes weird jumps. This was its position at the beginning of training, and after almost 3 hours this is its position.
Hey guys! Wanted to share my new devlog where I share my experiences training AI agents with Unity's ML Agents. I am making a bullet dodger 1v1 2D game where a human and AI shoot bullets at each other and dodge each other's projectiles. Past few weeks I have learned a lot in terms of how ML Agents training work and I thought it's a good time to share some of the basic principles that have helped me train my models!
I'm super new to Youtube, so really appreciate feedbacks on the video and the game's progress if you guys are interested in the space!
https://youtu.be/6mP52zovozE
I'm making a game in unity called Tic Stac Toe. After doing the math I realized there are way too many possibilities for a minimax algorithm to play the game, and I will therefore either need to create an evaluation function to use with a depth limit, or a new AI.
My thought is to have a reinforcement learning bot learn to evaluate a board and then use that evaluation for the minimax algorithm then play it against itself over and over to learn to evaluate a board correctly. Is this the best option and how and where can I learn to do this?
I recommend searching up the game just to see how it works.
too many possibilities... to play perfectly. But does it need to play perfectly?
also from what I see it doesn't look like that many possibilities
No, it does not have to play perfectly, but it has to play well enough, and the only way I see to do this is to make use of neural networks with reinforcement learning. Also, there are way too many possibilities: either 64 choose 32 or 31^16 (I think 64 choose 32 is more accurate, but I'm not great at probability, so not sure). Either way, both involve too many possibilities or game positions for even the most powerful computers to play perfectly.
how did you calculate that number?
you have a 4 by 4 by 4 matrix.
each turn you pick one of the 4 by 4 = 16 valid moves per turn.
the maximum pieces that can fit in the matrix is 4x4x4 = 64.
that's 64 turns.
64 turns * 16 valid moves per turn, gives total of 1024 moves to consider.
Well, 31^16 makes sense if you think about it: the game has 16 "stacks" (columns of blocks going up). Start with a blank stack and go up: the first block in that stack has two states, either X or O; go up another block and the number of states doubles: XX, XO, OX, OO. You can do this until you reach the fourth block, doubling each time. Adding up the states you get 2+4+8+16=30, add 1 for the fully blank stack. Then 31^16, as 16 stacks exist.
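That per-stack count can be sanity-checked in a couple of lines (this is an upper bound on board configurations: it counts each stack independently and ignores turn order and reachability):

```python
# One height-4 stack: the empty stack, plus 2^h fillings for each height h = 1..4
per_stack = 1 + sum(2 ** h for h in range(1, 5))   # 1 + 2 + 4 + 8 + 16 = 31
upper_bound = per_stack ** 16                       # 31^16 configurations, ~7.3e23
```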
up to a maximum height equal to the length of the game board
nevermind then, I thought the height is limited to 4
You're doing that wrong; that's not how the counting works. Say you have a 4-digit combination lock: what you're doing is saying there are 10 possible numbers and 4 total digits, so 10*4 = 40, but it's really 10^4, since every digit you add multiplies the number of possibilities.
It is
so it is limited to 4, ok
im not sure how that game is analogous to a combination lock?
plus we're not doing statistics here, we're doing combinatorics, we're literally counting
when writing an AI what's interesting is not the total permutations of valid game states,
it's how many are reachable given a starting state.
Not the game, the amount of possibilities to consider. A minimax algorithm must consider every possibility (assuming there is no depth variable) the possibilities of the game is a very big number as every other move you make increases the possibilities exponentially.
every other move you make increases the possibilities exponentially
How so? To me it seems like the more moves you make, the less permutations we need to consider, because it prunes which states are reachable
and if you don't need the AI to play perfectly, you can do what chess engines do, and that's just put a limit on how many turns into the future it considers
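A depth-limited minimax is a small change from the full version: when the depth budget runs out, fall back to a heuristic evaluation instead of recursing to the end of the game. A generic sketch with hypothetical callback names:

```python
def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Depth-limited minimax: searches `depth` plies, then scores leaves
    with the heuristic `evaluate` instead of playing games out fully."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      get_moves, apply_move, evaluate) for m in moves)
    return max(scores) if maximizing else min(scores)

# Toy game: each move adds 1 or 2 to the state; the score is the state itself.
# Maximizer moves first, then the minimizer, searching 2 plies deep.
best = minimax(0, 2, True, lambda s: [1, 2], lambda s, m: s + m, lambda s: s)
```

Adding alpha-beta pruning on top of this cuts the effective branching factor further without changing the result.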
It does prune which states are possible; however, after one move there are obviously 16 possibilities to consider, after another move there are 256, and this will increase exponentially up until the point where a stack is fully filled, then it will be multiplied by 15 or less depending on how many stacks have been filled.
right, the exponentiality comes from the fact for each step it needs to look at the reaction steps for doing minmax
what if instead of minmax u use a different planning algorithm?
idk how fun the AI will be if it for example always tries to go for the quickest win (BFS)
Yes, however chess engines have another AI to evaluate a board. This was my original question: how to make this. The reason they need it: take normal tic tac toe — when you can calculate every possibility, you can give 0 for a draw, 1 for a win, -1 for a loss. But it's not that simple when not every point where you stop calculating is a win, loss, or draw, since the game may not be over.
Adding a depth variable means that when the AI is done, the game may not be; therefore there must be some way to calculate how good the move/set of moves actually is.
Unless you wanna pick a random move until the AI sees a win. With chess you can do this by setting the depth to 2 and saying pick a random move until you see a mate in two, then do the mate.
how much memory (RAM) do u need to do minmax?
It's not about RAM it's about clock speed.
?
if you say so
so how much time does it take it to simulate 1 game?
What do you mean by simulate?
play a game against itself, without any ui
or
decide what its first move should be
I mean if the moves were random under a second
i mean if you do minmax from the start
If using minimax without a depth limit, the universe will most likely not exist anymore by the time it's done
Less than chess but still way too much
how deep can it go in 1 minute?
1.2470110E16 possibilities
idc about how many possibilities,
how deep can your laptop go in 1 minute?
Nevermind that number is wrong. It's less than 2.1E11 possibilities
What do you mean how deep?
minmax is a strategy for going through the tree of possibilities. how deep in the tree can it do?
Minimax algorithm calculates every possible game (assuming no alpha beta pruning) if there is no depth
No idea how to even calculate that
you don't.
your math gave you a hypothesis.
now finish the science, and go do an experiment.
Whenever the sum of all the possibilities calculated so far = 2.1E11
Are there any suggestions for youtube channels to make AI code?
have u figured it out @neon fox ?
I just realized, the tree weights can be calced offline, and will probably fit in RAM, so minmax can be superfast
Hey guys! Wanted to share my new devlog about training a competitive AI environment with Self-Play with Unity’s ML Agents. The game is a 2D symmetrical environment where the character can shoot bullets and dodge the opponent’s attacks by jumping, crouching, dashing, and moving.
Those who aren’t familiar with how Self-Play works in RL - basically, a neural network plays against older copies of itself for millions of games and trains to defeat them. By constantly playing against itself, it gradually improves its own skill level + get good against a variety of play styles. Self-play is pretty famous for training famous AI models in many board games, like Chess and Go, but I always wanted to employ the algorithm on a more “game”-y setting. And I love how good the results are. It’s pretty fun to see my two agents play each other and out-flank one another for each kill.
I tried to play it myself, but I need more practice. (To be fair, the AI did play a million more games than me) I get lucky and hit it sometimes, but I die like 7 times for one kill. If you guys are interested in this space, do check out this devlog! Leave a like/comment for feedback (that helps the channel).
yo i made an ai called T-bot in c
Anybody aware of any AI tools for Blender? There's this simple one, that just gives input to ChatGPT then outputs and executes scripts from it: https://github.com/gd3kr/BlenderGPT
Also, any known pathways to go from Midjourney to PBR textures, besides Adobe Substance 3D Sampler?
Not sure if this is right place to post but I made a cool set of compute shaders for GPU based agent simulation in my free time.
Should allow large dynamic 2D maps with efficient flow field based agent ai
My quick benchmark of the utils and napkin math: I can recompute an entire field on a 4K map about 500x per second
So should easily be able to support AI agents with dozens of position based priorities in real time
I'll post a demo of a few hundred thousand agents mining and building and fighting and fleeing if people are curious
I've got nearest feature fields and connected components labeling working with both taking <2ms
I am thinking of a way of creating a monster for a Space Invaders-type game that could use a laser that chases the player. But I do not want the player to die an inevitable death
ok
well
laser that chases
what if the player holds a mirror?
deflect it back at 'em
is anyone here knowledgeable in tensorflow/machine learning and has any experience with Magic: The Gathering? i have kind of a fun pet project that im working on and i have a very large dataset to train on, but i have no idea how i would go about tokenizing the data, or validating any output
I did a course on machine learning in college. I didn't use tensorflow though
before you start kneading the data, think about the value function
what exactly are you trying to solve?
Hey, I'm making a facial recognition thing to recognize what is and isn't my face. The problem is that the more images of me I add to my dataset, the less confident my AI is that I am me. Here is my code to train the AI.
```python
import os
import cv2 as cv
import numpy as np

people = ['Me', 'Not Me']
DIR = r'C:\Users\me\Downloads\TrainingData'

haar_cascade = cv.CascadeClassifier('haar_face.xml')

features = []
labels = []

def create_train():
    for person in people:
        path = os.path.join(DIR, person)  # the folder containing images of this specific person
        label = people.index(person)
        for img in os.listdir(path):
            img_path = os.path.join(path, img)
            img_array = cv.imread(img_path)
            # cv.imread returns BGR, so convert with COLOR_BGR2GRAY (not RGB2GRAY)
            gray = cv.cvtColor(img_array, cv.COLOR_BGR2GRAY)
            faces_rect = haar_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
            for (x, y, w, h) in faces_rect:
                faces_roi = gray[y : y + h, x : x + w]  # we only care about this region of the image
                features.append(faces_roi)
                labels.append(label)

create_train()
print('Training done ------------------------')

features = np.array(features, dtype='object')
labels = np.array(labels)

face_recognizer = cv.face.LBPHFaceRecognizer_create()  # use the opencv face recognizer

# Train the recognizer on the features list and the labels list
face_recognizer.train(features, labels)

face_recognizer.save('face_trained.yml')
np.save('features.npy', features)
np.save('labels.npy', labels)
```
ceci n'est pas un 1AKron 😛
jesus here sup
how (un)confident is it? what does your input data look like? what is the distribution of you vs not you? how much data is there? what biases are in the data?
~30% confident for me ~85% confident for not me
Input is webcam and training data is around 500 images of me and ~60 of not me
~60 images of not you, is not a lot of images
are there meaningful differences between the webcam and non-you images?
resolution, color balance, angle...
Anyone here want to help create the AI player logic for my game?
Hey, I have been trying to add object detection in Unity, but I keep on getting stuck.
I am trying to train my own data and I have been able to create working models with tutorials, but then when I try to actually use it in Unity with Barracuda, the model cannot be imported. The model is in onnx format and I get this:
OnnxImportException: Unexpected error while parsing layer /Split_output_0 of type Split.
Unsupported default attribute split for node /Split_output_0 of type Split. Value is required.
Asset import failed, "Assets/Models/convertedModel.onnx" > OnnxImportException: Unexpected error while parsing layer /Split_output_0 of type Split.
Unsupported default attribute split for node /Split_output_0 of type Split. Value is required.
I have been trying to google for solutions, but I don't understand enough of it to fix my issue
Can anyone help me with this? Thank you 🙂
what does it have to do with ai at all?
what do you mean object detection?
So lets say by example I have a mobile app. I let the user use their phone camera to look at a gameboard with specific characters, I need the App to identify these specific characters and know where they are (in the view)
Like this kind of object detection:
I have my training data set (annotated). And I have been able to train a model and then run and test it in Google Colab. But it is when I try to import or use it in Unity that I keep getting stuck
you probably want to talk to onnx
or barracuda/whatever lib you're using
Is there a good video to learn how to program more strategic AI to fight against the player
Or any good resources for things like MIT Battlecode and games like that
look up GOAP
Incredible, does this work for AI for battlecode and stuff like that
it probably will but
¯\_(ツ)_/¯
What do you mean by "battlecode"?
this is a good collection of GOAP resources https://alumni.media.mit.edu/~jorkin/goap.html
this is a GDC talk regarding extensions that can be made to basic GOAP systems https://www.youtube.com/watch?v=gm7K68663rA
https://battlecode.org/
Also, thanks for the resources
Junior Aerospace Rock thanked luta
similar to battlecode:
https://store.steampowered.com/app/464350/Screeps_World/
I eventually solved it myself. It had to be exported as an onnx with opset 9.
This is crazy bro
uh... ? <@&133522354419662848>?
accidental text?
i guess 😅
@cloud ivy what was that 🙂
no
mfs got me mistaken lol.
not yall
the BROADCAST
smh
just figured id expose a little. just some of theyre medicine
one more for good luck?
if someone want a job. maybe we can work something out. seriously. i need to be cloroxed. covid has been around me since 12/13/22
im a bit overextended
i have tons of files and networking scans and screens and tons of ips and a list of all their domains and proxies
is this guy schizo
AI can sense the game world and react. It can do so intelligently. AI can do more than just compare to spreadsheets.
an AI can adapt paths that you record for it to take.
qed: AI can easily be better than most QA testers.
yes
continuous integration with automated regression testing is a thing, I dunno why the games industry doesn't do more of it
but not all QA can be automated, some things still need a human in the loop, e.g. things that are hard to quantify
Ummm it would be interesting to see
QA is basically quantitative and binary, in the sense that it's about counting things and whether things work or not
thing is, some aspects of quality cannot be quantified to be measured by automation.
for example:
- is the game fun? frustrating?
- are the visuals pleasing?
idea: if your visuals were algorithmic, e.g. based on the golden ratio or something, you could measure that
but yes you'd need like a deep learning AI of some sort to judge the visuals as adequate. that may not be in the works for most studios to bother with
Honestly no clue if that's supposed to be a joke or you're being serious
it's naivety at its finest

step 1: make a not fun game
step 2: create a deep learning ai to classify game fun
step 3: use the ai to make the original game fun
step 4: release the most fun game ever and get rich
step 5: keep making fun games using the ai from step 2
neural nets are not magic.
it's just hill climbing for curved asymptotes
follow the algorithm !
Hey Developers!
I just made my game page visible on Steam. If anyone can give it a boost, I'll return the favor as well. Whether you already have or plan to release your games, just add it to your wishlist. https://store.steampowered.com/app/2466290/Fox_Run/
I would like to make a program that can assign kids to specific camp bunks based off of requests. Each camper can request 4 different kids in hierarchical order (they would rather be with the first request than the fourth one). Let's pretend there are 60 campers and 3 bunks of 20, how would I make a program that has access to each camper's requests and would be able to assign kids to the 3 bunks trying its best to fulfill these requests? I imagine a search algorithm such as BFS would be the most effective here but I cannot think of how to implement it.
First you're going to need to assign values to the preferences. You need to know whether 2 kids with their second choice is worse/better than 1 kid with their 3rd choice. You choose these values based on what you think is right.
The problem you describe almost sounds like what the Hungarian algorithm is made for. It gives an optimal assignment for a single kid per bunk rather than multiple kids per bunk. You can get around this by just creating multiples of the same bunk (and repeating the preferences)
If you use a library which has this implemented it's as simple as just filling in the matrix. (Do look up whether it maximizes or minimizes value/cost)
That is the main problem I have though. I want to get the optimal assignment of multiple kids per bunk, I do not want the first kid I choose to have their optimal assignment of a bunk as the last kid I choose will have a likely chance of having a non optimal bunk. How would I create the best possible bunk assignments for every kid in the bunk?
Also I searched up the Hungarian algorithm and do not understand how it would help solve my problem.
I feel like I misread the problem, so after rereading it it's about grouping kids who like each other together in different bunks right?
Yes, kids request 4 others they like and would want to be in a bunk with and the AI must attempt to make a certain amount of bunks containing a certain amount of kids (let's say 3 bunks with 20 kids each) where each bunk is optimally made such that it fulfills the majority of the requests made.
Ok so I was researching the algorithm you said and found a problem exactly like mine on this website (it's scenario 1). https://www.google.com/amp/s/www.thinkautonomous.ai/blog/hungarian-algorithm/amp/. The website does not state how to solve it but if it talks about it here then that means it must be solvable with the Hungarian algorithm. Do you have any idea how it would work cause I cannot seem to figure out how it is similar to the example it does solve of workers and jobs.
Idk, the problem you meant seems fundamentally harder to solve than what I originally thought it was, so I don't think the hungarian algo would work
Is it important it's optimal?
Like I mean the optimal solution, not just a good solution
Does not have to be optimal just good enough
I'd just initialize a random assignment
Then iterate over each person and look for a switch which gives the most improvement
You can keep doing this over and over until no improvements are possible anymore
So like I'd give a score to each bunk based on how many kids are with kids they want. Someone's first choice is worth 4 points, 2nd is worth 3, 3rd is 2, and 4th is 1. I'd add up the score for each kid and then swap kids to maximize the score. Is this your idea? If it is, wouldn't this be O(n^n), which is an insane amount if the camp size (n) is anything above 20?
More like O(k*n³) with k the number of repeated iterations, which I wouldn't expect to become large
If your data set is pretty small even O(n^3) complexity is not computationally large, unless you run it on 386 processors. First make a working version of your algorithm and then optimize later.
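To make the swap idea concrete, here's a rough Python sketch of that hill-climbing approach (the 4/3/2/1 scoring is just the point values discussed above, all names are made up, and it finds a good assignment, not a provably optimal one):

```python
import random

def score(bunks, prefs):
    # prefs[kid] = list of up to 4 requested kids, most-wanted first
    total = 0
    for bunk in bunks:
        members = set(bunk)
        for kid in bunk:
            for rank, wanted in enumerate(prefs.get(kid, [])):
                if wanted in members:
                    total += 4 - rank  # 1st choice = 4 pts, ..., 4th = 1 pt
    return total

def assign(kids, prefs, n_bunks, seed=0):
    # assumes len(kids) is divisible by n_bunks
    rng = random.Random(seed)
    kids = list(kids)
    rng.shuffle(kids)  # start from a random assignment
    size = len(kids) // n_bunks
    bunks = [kids[i * size:(i + 1) * size] for i in range(n_bunks)]
    best = score(bunks, prefs)
    improved = True
    while improved:  # repeat sweeps until no swap helps (the 'k' iterations)
        improved = False
        for a in range(n_bunks):
            for b in range(a + 1, n_bunks):
                for i in range(size):
                    for j in range(size):
                        # try swapping one kid between bunks a and b
                        bunks[a][i], bunks[b][j] = bunks[b][j], bunks[a][i]
                        s = score(bunks, prefs)
                        if s > best:
                            best = s
                            improved = True
                        else:  # no improvement: undo the swap
                            bunks[a][i], bunks[b][j] = bunks[b][j], bunks[a][i]
    return bunks, best
```

Each sweep tries every cross-bunk swap and keeps any that raise the total score; it stops when a full sweep finds no improvement.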
I'm using chatGPT 3.5 to do some coding. I'm having trouble asking it to revise code, it often seems to ignore the desired behavior I'm asking for. If I ask it to generate code, it's generally much more competent. How can I prompt it more efficiently?
generally I'll say something like "this is the behavior that's currently happening. This is what should be happening" I can do my own coding and I'll sometimes go in and fix it when it just won't listen, but I'm trying to use it as a tool to rapidly prototype. This is using the API if that matters, it's taking advantage of a unity plugin called AI Toolbox
remember that chatgpt is not an AI codegen tool
the fact that it can write code at all is just a byproduct of having read lots of code
Naturally, I am just wondering if there's a way to prompt in a way that returns better results
I'd need to see specifics
you might have more luck with gpt4
also, when you prompt gpt. The quality of your vocabulary matters.
If you use specific ("rewrite" instead of "write it again" [bad example but I guess you'll understand]) vocabulary, it's likelier to give what you want.
Read a book about the programming language you use and code yourself. This is the most efficient way. As other people mentioned ChatGPT is not a coding tool at all and you wouldn't get any decent application, especially when developing a game.
I can code, but I've been able to make a working character controller, stamina system, turn order system, and character switcher (alongside a simple damage system with floating text damage numbers) for a hybrid 3rd person shooter/ttrpg from scratch in probably 1/6th the time it would've taken to code from scratch. I'll need to go in and clean up the code and change variable names to follow my preferred coding style/remove some inefficiency, but ultimately I've saved a lot of time for some frustration with prompting and about 30 cents in openAI tokens. As the project gets more complex it will be less useful though, I have to make it aware of expected inputs/outputs with exact naming from/to other scripts since it's not aware of the codebase as a whole, but for early prototyping it's proving to be worthwhile
Who can make an AI like ChatGPT
How much money will u take to make an AI like ChatGPT but in Discord
OpenAI company. Please approach them with your offer.
ChatGPT has an API, which I assume you could call from a discord bot. Prices are available here https://openai.com/pricing
A normal well-developed game has tens or hundreds of thousands of lines of code. No machine learning model is capable of handling the complexity of such code. Therefore my suggestion to learn a programming language is valid. "Can code" is not always equal to developing a well-designed, optimized and flexible application, which is definitely not a ChatGPT strength.
wtf that's totally exaggerating... don't scare them away
you can easily get away with a few thousand lines and have a good simple game
boy, even a few hundred if you're using 3rd party libraries
(which gpt knows how to use)
That said.. gpt4 can digest docs you give it and help you out based on them
We are talking about absolutely different levels of design and development of applications. Chatgpt is just a tool to play, it's not by any means an instrument for any serious development process.
uh... dunno if you followed the thing last winter but someone definitely made a game using chatgpt
jsut googling gave me a few results
Kosmic stated that ChatGPT is part of their development workflow. I'm not sure what the point of debating that is.
I haven't been able to incorporate ChatGPT into mine effectively though.. the API I'm working with has changed quite a bit over the last few years, which makes it ineffective at coding.
I do chit chat with it about design though.
oh really?
People even developed flappy bird in excel but it doesn't mean that it's a proper tool for development 😎
I haven't prompted it for code in a while
yeah we're not discussing if it's a proper tool
this is literally #ai-dev channel
dev with ai, or dev some ai
what's the differencelel
(there's one)
I attempted to have it guide me through implementing ~~GOAT~~ GOAP. It was helpful when I got confused and wanted to think aloud. Personally I use ChatGPT mostly for rubber ducking.
yes
it's an awesome rubber duck
well it was when I used it
however sometimes it can say something is possible while not being possible... it took me the longest time to realize something I was trying to do in c++ was just impossible... with chatGPT telling me how to achieve it.. :|
Yeah, it can certainly be misleading. I don't rely on it for any specifics.
I'd hope it would provide better results for a language as mature as C++, but I guess not. I'm just using it for lua / Roblox right now. At least the lua language itself is simple. The Roblox API changes too much for the generated code to be useful (other than providing a structural example)
I've only attempted to ask it tricky stuff though.. maybe it would have a higher success rate with easier questions (the stuff I don't need help with 🙃)
ye I asked it a few technicalities it should know
wonder if it could spurt out some kind of a* algo
gpt is actually writing me one in python right now lol
it's probably stolen from a repo though
Chatgpt can handle easy tasks for Python for sure
I wonder how games like World War Z had such massive amounts of zombies at a time without absolutely butchering performance. I looked into vertex animation that might be promising but idk, anyone have anything they can point me towards?
instanced rendering
I'm more so concerned with the logic portion/CPU bottleneck. Doing some research, I came across Entity Component Systems and am looking into understanding those, seems real promising.
unless your entities are very varied that won't make much of a difference
basically you will want to do things like make your entities simple (esp pathfinding wise), and possibly update far away enemies at a lower update rate (30hz vs 60hz)
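The lower-update-rate idea is usually done by time-slicing. A rough Python sketch (the 50-unit "far" threshold and the dict entity shape are made up for illustration):

```python
def entities_to_update(entities, player_pos, tick, far_dist=50.0):
    """Near entities update every tick; far ones only every other tick."""
    due = []
    for i, e in enumerate(entities):
        dx = e["x"] - player_pos[0]
        dy = e["y"] - player_pos[1]
        far = dx * dx + dy * dy > far_dist * far_dist  # squared distance, no sqrt needed
        # stagger far entities by index so they don't all update on the same tick
        if not far or (tick + i) % 2 == 0:
            due.append(e)
    return due
```

The same scheme extends to more tiers (e.g. very far entities every 4th tick), which smooths the per-frame cost out.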
Hey, do any of you guys have any resources on infinite axis utility ai? The guy that is the main proponent might have closed all references to it, can't access any of his articles nor the GDC talks...
Does anybody have access to this video "Building and Traversing Navigation Meshes with Recast and Detour (Project Video)" ?
http://web.archive.org/web/20190705132345/http://aigamedev.com/insider/presentations/recast-teaser/
Hi guys, I'm struggling with an issue, maybe you can help. Not sure if this is the right section, but it does involve AI. I'm looking for a function that, given an array of vectors, would return a random vector position within those bounds. Does that make sense? Something like this:
function findRandomInArea(positions:Array<Vector3>){
// should find a random position within the bounds of positions where each vector represent a "corner" of an area
}
let area = [
new Vector3(-14, 0, 3.6),
new Vector3(3.7, 0, 3.4),
new Vector3(3.7, 0, 15.3),
new Vector3(13.45, 0, 14.63),
];
let randomPoint = findRandomInArea(area);
is the area guaranteed to be planar?
e.g. not this
yes, just x and z, Y is not used
so you can probably find the mapping from cartesian to your coordinates (linear algebra) and just pick a point in a rect and map it to your coords
alternatively you could treat it as a polygon and just pick a point between the extents (min/max) and do an 'inside' check and retry until a point you pick is inside
there's probably another way
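the extents+retry version would be roughly this, as a Python sketch (standard ray-casting inside test; assumes the area is a simple planar polygon in XZ like your example):

```python
import random

def point_in_polygon(px, pz, poly):
    """Ray-casting test: count edge crossings of a ray going +x from (px, pz)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, z1 = poly[i]
        x2, z2 = poly[(i + 1) % n]
        if (z1 > pz) != (z2 > pz):  # edge straddles the horizontal line z = pz
            # x coordinate where the edge crosses that line
            x_cross = x1 + (pz - z1) * (x2 - x1) / (z2 - z1)
            if px < x_cross:
                inside = not inside
    return inside

def random_point_in_area(poly, rng=random):
    # bounding rectangle of the polygon
    min_x = min(x for x, _ in poly); max_x = max(x for x, _ in poly)
    min_z = min(z for _, z in poly); max_z = max(z for _, z in poly)
    while True:  # retry until a sample lands inside the polygon
        px = rng.uniform(min_x, max_x)
        pz = rng.uniform(min_z, max_z)
        if point_in_polygon(px, pz, poly):
            return px, pz
```

for a convex quad like yours the rejection rate is low, so the retry loop usually finishes in a couple of iterations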
anyone here familiar with minimax algorithms and can help me out with an issue im having?
Is there any place where I can look at examples of complicated Behaviour trees used in a game? I understand the node types and decorators. But seeing actual examples of production grade BTs would give me a better idea of how they can be used.
!dontask
I would look at GOAP
That's for dynamic AI that thinks up action sequences by itself. I want to have more control over how the enemy acts and there is no need for complicated dynamic AI.
Even simple stuff would do. Like examples of various BTs utilizing the parallel node.
i mean a behavior tree is basically a state machine
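since real production trees are hard to find, here's a toy Python sketch of the core idea (a made-up minimal framework, not from any engine; real BTs also have a RUNNING status, decorators, parallel nodes, etc.):

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Runs children in order; fails on the first child that fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Runs children in order; succeeds on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, ctx):
        for child in self.children:
            if child.tick(ctx) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, ctx):
        return SUCCESS if self.fn(ctx) else FAILURE

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, ctx):
        self.fn(ctx)
        return SUCCESS

# e.g. attack if the player is visible, otherwise patrol
tree = Selector(
    Sequence(Condition(lambda ctx: ctx["player_visible"]),
             Action(lambda ctx: ctx.__setitem__("doing", "attack"))),
    Action(lambda ctx: ctx.__setitem__("doing", "patrol")),
)
```

ticking the tree every frame re-evaluates the conditions, which is what makes it react like a state machine without explicit transitions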
Makes sense. I've approached it differently to avoid any performance issues (calculating another random point of the area every time an entity arrives at its destination): just having a list of hard-coded points in different parts of the area works well and has the advantage of being very simple (not very dynamic though). Appreciate the answer in all cases @patent blade
Create a logo for pitbull dog
?
So you can see the results 🙂
Hello everyone! I have a question: how do I make an advanced AI for an airplane that should follow waypoints stored in a public variable? If the next waypoint is to the left or right, the airplane should adjust its roll and yaw, and also its pitch (based on the waypoint altitude), so it can smoothly fly to that waypoint. I am specifically searching for a PID solution, if you could help. Thanks.
Btw, I am a Unity developer and I use C#. I already have my code, but it's so simple that it doesn't do exactly what I want.
do u do physics or is it on rails?
physics
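for a physics-driven plane the usual approach is one PID per axis (roll, pitch, yaw), each fed the signed angular error toward the waypoint. A minimal sketch (Python for illustration even though you're in C#; the kp/ki/kd gains are placeholders you'd tune):

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # derivative term is zero on the first call (no previous sample yet)
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

each FixedUpdate you'd compute the angle between the plane's forward vector and the direction to the waypoint, feed it to the controller, and apply the output as torque on that axis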
Hey I was just wondering if anyone here has run into or knows how to fix this issue. After following this tutorial for installing tensorflow with gpu support, https://youtu.be/hHWkvEcDBO0, I ran into an issue of tensorflow not being able to actually find my gpu. I ran this code: `physical_devices = tf.config.list_physical_devices("GPU")` followed by `tf.config.experimental.set_memory_growth(physical_devices[0], True)` and it gives me an "IndexError: list index out of range", which I assume is because `physical_devices` is empty since it cannot find a GPU. What is the problem? I have followed everything in the video and definitely have a gpu but tensorflow will not find it.
working on this was fun
https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-3-5-ray-reconstruction/
has anyone integrated similar tech?
did you work on the nvidia side or the cdprojekt side?
Nvidia
nice, the game I work on doesn't really have any raytracing but I've worked w/ nvidia in the past
Feels like a million $ :)
https://www.youtube.com/watch?v=_Tx_wrQmX6s
Heya, are there any assets for 3D patrol/chase/attack and possibly other states for AI that you guys can recommend? I tried googling but there are sooo many of them. (for Unity)
My major issue with AI coding, if let loose on a whole program, is that at some point one might not be able to specify features precisely enough and then you have to add them manually. At that point you have to dive into a completely unknown code-base and it's unsure if the design of the code is good enough for a human to understand. I think so - since it's trained on human programming patterns. However it's still an "unknown" codebase one has to dive into when it comes to the tip-of-the-iceberg features of special business code.
It's the same with AI generated art. I use it to some extent and then switch over to Krita to make my final adjustments. However it's trivial to adjust an image compared to a codebase.
im learning neural networks, does it help with ai programming in games as well?
if you're thinking of AI logic for like enemies, no
then what should i be learning for enemies and bosses
things like GOAP and behavior trees
imho: state machines, graph theory
isn't there like some course which covers all these topics
like i dont know what all i should be studying
No probably not
Have A.I. analyze voice, behavior, and account information to determine age of user, and only queue similar age range in the matchmaking queue.
Mandatory for games that rate for Mature/Adult
Tired of dumbass parents not monitoring the games their kids are playing. You intend to create a space for adults to have fun and mingle with each other online by rating your online game M/Adult, but then we can't actually have those experiences because other dumbfuckery parents let their too young children play them.
Myers-Briggs 16 Personality Types - Personality Test - Game-ify? Gamify?
Main Characters = INFJ/ENTJ/INTJ/ENFJ/ENTP/INTP
NPCS = ENFJ/ESTP/INFP/ISTP/ENFP/ESFP/ESTJ/ISFP/ISTJ/ESFJ/ISFJ
INTJ: The Architect, God, Brain, Source, Chief Memeplexer
INFJ: The Advocate, Christ, Chief Scapegoat, Mediator
ENTJ: Chief Executive, Evangelist, Middle Manager, Aspiring CEO, Apostle Paul
INTP: Rock Star, Counselor, Wise Owl, Yoda
ENTP: Satan, Failed king, Jester in court, Red pilling truth teller, Joker
Fact 1: Eve was an ENFP muse in the primordial walled garden story.
Fact 2: The ISFJ SHeeple chooses who the proxy leaders are.
Taylor Swift is ISFJ who may assist in picking the next president, as long as it's not Trumptardians or anyone of the like who are ignorant tyrants that don't deserve leadership because they lack the education and technical understanding to navigate our society and understand how to harness and make decisions with the technologies we have available to us. His IQ is too low.
I SiuMoi have taken the Myers-Briggs personality test several times. It's never consistent but it's always either INFJ-A or INTJ.
Some say I may be slight on the spectrum high functioning that I'm able to flip between the two types.

I have a conversation with an art A.I. Developer regarding this topic: Age restricted matchmaking queues for online gameplay on games rated M/adults - Since we can't control how dumb parents are at allowing their kids play games they are too young to play, the problem needs to be dealt with game developer level and the matchmaking process.
I find it irritating this is text only and I can't just screenshot the conversation and paste it here. The conversation happened on a public forum.
[1:42 PM]!~♡SiuMoi♡~!: I literally would love an internet age 30+ only PC gaming/internet hangout place that is strict on that age limit.
[1:42 PM]!~♡SiuMoi♡~!: Tired of people claiming to have an adult only atmosphere and yet people too young still slip through because of failed parenting
[1:43 PM]!~♡SiuMoi♡~!: This problem triggers me badly
[1:43 PM]!~♡SiuMoi♡~!: I yell about it
[1:43 PM]!~♡SiuMoi♡~!: in fact, i just did yell about it
[1:44 PM]!~♡SiuMoi♡~!: Games need to start being more strict. You claim "Mature" and serve online content where "Mature" people go to interact and there's underage people on there, well, we can't have "Mature" interactions now can we? We have to censor ourselves because some dumbfuck parents let their too young kids play a game that only adults should be playing.
[1:49 PM]!~♡SiuMoi♡~!: Fucking deadly virus able to jump thousands of miles of ocean to get to a land on the other side of the world on Trump's watch, fucking up reality and life plans and life goals of EVERYONE. The internet becomes our hangout because of it,, and we can't even have that space because of dumb parents
[1:50 PM]alaricus: That's failure to reliably check for age of new gamers and sieve the gaming community
Wait what is preventing you from screenshoting it
It is possible that, while the game is mature, regulation of vocabulary and such is there for more "healthy" engagement and less toxicity
I'm working right now but this is an important interruption
Idk why that made me laugh 😅
When I press the button to add screenshot it won't let me upload it. There's no option
Back to work
To discord? That's odd is working for me
But yeah have a good work session
Probably role related
Shhh back to work! 
yes, #get-a-role to assign one to yourself
Lunch break!
I am unable to find my role of "PC Master Race Boss lady" 
@worthy wind cool name!
I have Advanced Tachyon Technologies disks all over my house. Is that the same thing as in your name?
Haha, not the same thing but same vibes i guess
No. If you're out and about hanging out with your adult friends and no underage people around, do you think any adults are monitoring each other's vocabulary? No, too busy having a good time, whether it involves foul language or not.
Also, think about this aspect of reality because of COVID.
Think of how much of our life choices and reality had to change to accomodate the circumstances and keep ourselves as sane and productive and safe as possible through it all? All our plans and futures altered because our leadership at the time failed when it really mattered and still want a trophy and to be revoted in as leadership.
I've had COVID 3 times now and I work from home and barely ever leave my home, and I have above average sanitation habits, and am always the last in the household to catch whatever is spreading around.
I can't go out and socialize normally with my adult friends and peers. Everyones focusing on survival too much and not enough time to actually enjoy each other.
Online rated Mature PC games that allow for more advanced interactions than the simplistic PC games that flood the market are a great outlet for us adults, but it gets ruined because the underage population on these games are not controlled. You screw us out of an outlet that can help our mental health during these times.
I don't know about the rest of the PC gaming population and how others handle the online toxicity, but I've been dealing with online multiplayer gaming toxicity since I was like 16. I'll be 39 this year.
I have the mental fortitude to allow adults in adult environments the space to vent frustrations about life that no underage people have a business in being privvy to because they should be focusing on learning and having fun, not being involved in adult environments with adults trying to help each other and interact with each other to keep existence going for the future generations.
I mean, i agree in the sense that monitoring vocabulary is not the ideal way to go about it, but rn it's the best approximation if a game doesn't have a proper reporting system that works and can't be easily abused
But at the same time, if games have the option to mute hide others voice or messages that is an ideal solution to me
If you are having trouble, just go to settings or the designated menu and hide messages from x player
Or mute his voice
Don't know why it isn't enough now that I think about it
All it would take is one team to create an open sourced matchmaking platform involving a.i. that monitors in-game communications and behavior and can also access user account information (Steam/Battlenet/etc.) They have to enter in billing information and birthday creating these accounts.
A.i. will be able to accurately determine the age of the user, and then group them together with similar age for matchmaking queues
I am willing to wait as long as it takes to queue with only age 30+ users
Feels like it's kind of unnecessary and intrusive, if some people and companies were OK with just the hide/mute option, because that solves all problems imo
The voice thing is also probably easier to achieve than behavior
no it doesn't. what if a game has sexual content? You can mute the user, but do they still hear you even if you mute them? will they hear conversations if they are muted?
And if I'm not wrong there is already detectors of vocabulary in voice chats
Not sure if i follow
but are those detectors using that voice data to create matchmaking queues >?
Ah
I see
That's an interesting proposition
But yeah I'd assume the user still hears you if you mute them, unlike discord where it hides your message by default, but i can't see why options can't be detailed to that level of choosing tbh
sigh... i really don't want to use my personal experience recently here as an example
If you don't want to be heard then mute yourself
of why your solutions don't work for every game
its embarrassing and not for anyone underage to hear
Don't think muting others should mute you to them by default tbh
Ummm
I don't understand the sexual content part btw
But I think might make more sense in the relation to allowing access to the content that i hadn't realized before?
I own probably about a thousand PC games across different platforms. I've experienced a lot. The advanced multiplayer games, the "mute" and "modify behavior" does not always play out well. an advanced age restriction matchmaking system is the better solution
If you support the mental health of PC gaming adults that give a shit about society and our progress as a human species, please, consider developing this age restricted matchmaking for online multiplayer pc gaming. Give us the space to be adults and hang out with each other without the underage people around.
In some games- the in-game characters, (not voice chat), say things or say things in certain tones of voice that is not appropriate for underage.
Sometimes these words or tones of how the words are said triggers reactions in adults.
The game doesn't need to be sexual in nature either, but it still happens
Imagine reacting to something you didn't expect to be said or in the tone it was said in because of an NPC character, and there's underage people also in the lobby, but the game is rated M for mature?
It's so fucking cringe i can't fucking stand it
PLEASE FIX MATCHMAKING
i'm so triggered right now i cant
Maybe the problem lies in parental control, not sure if it should be up to the development or publishing party to do that. Make and publish the games for your intended audience, and let parents sort it out
wrong.
And the mentality that you are pushing, if all the other devs follow suit, that's a huge problem
I'd imagine the government will have to intervene if underage exposure to inappropriate content is not controlled
and I'll be cheerleading them on as they go after it
I got many years of experience; I could sit on a court stand for a long time just recounting all the bullshit I encountered with an unfiltered internet and gaming matchmaking systems. The horrors I've overheard and the cyberwarfare and bullying I've endured
Government tends to gush over this too much
Enough ammo to actually make moves and actually force some sort of control and oversight
They'd love that
Yup
anddddd
I heard gta 6 could be implementing some detection system to allow access to content but don't know if that's true
i have my local government under my thumb because i went through all the right channels to protect myself and my child and they failed, and they failed on cameras recording.
i got them all in checkmate
so if I really wanted to make some moves, I could
but i prefer not to go that route
i prefer devs understand and trust my experience
and just do it, so i dont have to actually go the legal route to get it done
do u guys have any tips on optimizing LLM outputs? Right now I'm running the initial LLM output over a few more LLM models, one to answer yes/no on some questions, one extract instructions based on the narration text, and one extract suggestions for the player, so every player input requires me to re-run the same output over 3 models (a total of 4 LLM calls), sad
Hey so I'm working on an action RPG in the same sort of genre as games like Secret of Mana. It supports 1-3 player local (couch) co-op with AI allies if you have less than three human players. I've been making gradual improvements to the way the AI works - and I still see areas that could be improved further - but I'm afraid I've already made the AI too powerful! In this genre you need to walk a tightrope where CPU allies are good enough not to frustrate the player(s) or hold them back, but not so good that they can do the player's job for them and trivialize the challenge. I'm afraid I may have made them too competent at this point and I'm not sure what I should do about it.
Here is a video of my current build so you can see for yourself:
https://youtu.be/pX007lW_hLs
If you want to get a feel for it yourself, you can also play this preview build in your web browser at https://gx.games/games/axprhn/maru-island/tracks/ad6b400e-6e78-46ac-a927-71378f5cbafb/ (controller highly recommended but not required)
I'm playing the blue haired character in the video and my AI is controlling the other two. The AI won't attack enemies unless a human hits them first, so in this video I go through "tagging" enemies with a single attack and letting the AI handle the rest. And handle it they do - including the boss fight - with little difficulty.
My question is twofold - do you think the AI is indeed too powerful right now, and if so what are some of my options for dealing with that? Thanks for your input!
(please ping me if you reply!!!)
Wouldn't say the good AI is a bad thing, but you probably don't want players being able to mostly sit back and do nothing.
What I'd try to go for is let players rely on the companions for short periods, but require the player to actively fight. Same with boss fights, it's not bad if companions do the majority of the work, the player just needs to chip in every so often.
The way I'd go about it is to keep the AI good as is, but artificially weaken the companions. The most direct way I can think of doing this with the goals I mentioned above is with a hidden stamina bar.
Companions' attacks would require and drain stamina, while the player landing hits gives back stamina to the companions.
Some extra things to consider:
- Different stages of companion effectiveness (attack rate), to smooth out the stamina penalty
- You'd probably want to communicate this somehow for example with companions calling for help at low stamina during battle
- If the companions remain good at avoiding damage, you could add an out of breath state which allows enemies to get easier hits in
- You could play with a longer term stamina bar which affects the (short term) stamina regeneration. For example if the player has been doing the minimum amount of hits to regenerate stamina for a long time, the penalty would be requiring more hits for the same amount of stamina until they get a lot more hits in.
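The hidden-stamina idea above could be sketched roughly like this. All names and numbers here are hypothetical, just to show the stage-based effectiveness and the regen-on-player-hits loop:

```python
# Hypothetical sketch of a hidden companion stamina bar.
# Thresholds are (minimum stamina, attack-rate multiplier) stages,
# so effectiveness degrades gradually instead of cutting off hard.
STAGE_THRESHOLDS = [(60, 1.0), (30, 0.6), (0, 0.25)]

class Companion:
    def __init__(self, max_stamina=100.0):
        self.max_stamina = max_stamina
        self.stamina = max_stamina

    def attack_rate(self):
        # Pick the multiplier for the current stamina stage.
        for threshold, rate in STAGE_THRESHOLDS:
            if self.stamina >= threshold:
                return rate
        return 0.0

    def on_companion_attack(self, cost=10.0):
        # Companion attacks drain stamina.
        self.stamina = max(0.0, self.stamina - cost)

    def on_player_hit(self, regen=15.0):
        # The player landing hits feeds stamina back to companions.
        self.stamina = min(self.max_stamina, self.stamina + regen)
```

The longer-term "penalty" bar could then just scale `regen` down when the player has been coasting for too long.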
My favorite "hands off" companion system would be something like FF12's gambit system where the AI is highly configurable to the point where for all but the hardest battles, it's more about reprogramming tactics than actually controlling any of the characters. I always kind of liked that system, but I know it's controversial and I'm probably in the minority there.
You can sort of imagine a continuum of options between gambit-style "player codes all the logic" and fully unassisted AI without artificially weakening the AI. Like in Secret of Mana where there's a grid for behavior selection between a few options, but the AI handles it from there.
Hi all. Fairly new around these parts. Been working on my AI for a war-like turn-based game (think Advance Wars). Thought I would do a post about what it looks like and wondered if people would be interested: https://www.realm-tactics.com/development/ai-dive-in/. The whole project is very much a work in progress - any feedback would be very much welcomed.
Anyone just want to talk through some fun AI ideas for a game I haven't been working on for months?
I am learning more about different games AI. Sounds interesting
Hey!
Well I talked about it a bit in the gamedev chat thread but I'm up early so why not
Basically the game I'm working on is a sort of unique hybrid of action roguelike and rts
Where I have leader characters that need more sophisticated AI and units that still need to behave fairly intelligently
Imagine basically a player or ai controlled unit with a bunch of units that follow them around and fight or hangout around buildings doing work
So the challenge is I need a pretty efficient system to handle all the units since there is going to be a large dynamic map filled with them
Are you familiar at all with flow fields?
Interesting. Not specifically familiar with flow fields but just had a quick read and get the idea (a background in physics helps)
So your game has enough entities to consider that doing it all in an individual entity way would not be feasible?
Exactly
Or at least that's the design
My prototypes all just work with fewer units
But my hope is to really crank the number up
Basically I want to use flow fields as the basis for highly parallel ai. For example, imagine you have a bunch of workers harvesting trees around a structure, one fairly clever way to do that is to compute two flow fields, one that goes towards trees and one that goes towards the structure where they are collected
Those fields shouldn't change that often and when they do you can recompute only the stale portion of it
My idea is to handle the ai by having the agents be fairly simple and state based
Where state controls which fields they follow
In that simple example you might just have two states, carrying resources and looking for resources
One will allow the agents to follow gradients to trees and the other back to their work structure
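That two-field setup could be sketched something like this, assuming a simple walkable grid and BFS-based integer distance fields (all names here are made up for illustration, not the actual game's code):

```python
from collections import deque

def flow_field(grid, goals):
    """BFS distance field over a walkable grid; agents descend it to reach goals.
    grid: list of strings, '#' = wall. goals: set of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {g: 0 for g in goals}
    queue = deque(goals)
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def step(dist, pos):
    """One agent step: move to the reachable neighbour with the lowest distance."""
    r, c = pos
    options = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    options = [p for p in options if p in dist] + [pos]
    return min(options, key=lambda p: dist.get(p, float('inf')))
```

A "carrying resources" worker would call `step(to_structure, pos)` and a "looking for resources" one `step(to_trees, pos)`; only the field a change touches needs recomputing.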
Ah yes I see, so you can compute these flow fields at one point/when things in game change and the agents can lock on to which fields they need to. That seems pretty efficient
It should be but the bump I'm hitting is that I want the behaviors to be a little more complex than just caring about a single field. For example, my workers should run inside to avoid enemies and they should avoid going towards resources that are near enemies.
It's a 2D-based game right? As in your fields will be vectors in a 2D space. So when you say enter an area, will all of these be calculated based on what is around?
Exactly 2D (or at least logic updates in 2D)
Ahh yeah. So that would be more of a dynamic field or maybe a hybrid approach where there is also an added vector based on more dynamic factors which can be calculated on the fly
Yeah exactly. The sort of naive and maybe good enough solution is I could compute scalar fields for different things based on distance to nearby obstacles then take weighted sums of the fields and use gradient descent
That would let me do things like avoid going near resources that have enemies near them (for example)
The main downside to that is that it doesn't really get the full benefit of a path based solution the way flow fields do. Since each field is independent we don't really get a vector field pointing us along the most efficient route to an objective. We just get a fuzzy avoidance/attractor behavior from various unit priorities.
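The weighted-sum idea could look roughly like this, assuming each scalar field is a dict from grid cell to cost and agents just take a local downhill step (a sketch under those assumptions, not a full implementation):

```python
def combined_cost(pos, fields, weights):
    """Weighted sum of independent scalar fields at a grid cell.
    Each field maps (row, col) -> cost; missing cells count as impassable."""
    total = 0.0
    for field, w in zip(fields, weights):
        total += w * field.get(pos, float('inf'))
    return total

def descend(pos, fields, weights):
    """Local descent step: pick the cheapest cell among here and the 4 neighbours."""
    r, c = pos
    candidates = [pos] + [(r + dr, c + dc)
                          for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return min(candidates, key=lambda p: combined_cost(p, fields, weights))
```

As noted above, this only gives fuzzy attract/avoid behaviour; unlike a true flow field, the summed cost surface can have local minima that stall an agent short of its objective.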
Yeah that would be one way of doing it. I guess I imagined that if the 'dynamic factors' like enemies were few enough then doing a quick whip round of all entities and adding a vector pointing away from them with, say, a magnitude reducing with distance from them. I don't know if the numbers you are planning for that would be too much. I guess that your gradient descent way would let you do it more globally - calculating the full field per tick and then using that for each of your units.
I think that methods such as this are also quite hard to balance. I assume that in some scenarios you would want your units to plunge in to nab resources near enemies. Are there factors in friendly units around and stuff? Like safety in numbers of 'fighting units' which can protect them (not knowing much about your game I am making assumptions
Those are great questions
I am making a 2d grid based battle game. The idea of what the individual unit wants to do and the overall picture of what the whole team wants to do are quite tricky to balance. It's a battle-type game, so the idea of sacrificing individual units for the overall tactical position is an interesting problem to solve.
I was hoping to have a sort of hybrid where you assign them jobs and they do them effectively (going in groups or fleeing when appropriate). But where there are some behaviors that might be less desirable (like units that are in your army might loot and pillage depending on their disposition)
Oh that's cool
Yeah I haven't really gotten too deep into balance yet and might be overcomplicating it. It might be fine to just have them use flow fields and be fairly dumb for now
And then just add combined fields later only if it is problematic for gameplay
I want things to feel really big and dynamic and like you are in a giant simulation but I don't know that they actually need to be that smart to give that feeling
What do you do for that in your game?
@polar timber I was experimenting with behavior trees (well more like behavior tree based states) for the leader AIs
So that there are characters running around and ordering units to fight or build or collect according to more complex nuanced logic but the units themselves then just follow simple state based logic once assigned
Yeah I have adopted a fairly agile way of working. I have 'Features' which the AI considers. And add features when I think that it should take something into account. It goes a bit deeper into like a bidding system depending on the sort of tactical action it could take but effectively is fairly stupid in that it just considers what would happen if it took different actions based on feature weights.
Your idea with the flow fields I think is a really good way of going. Yeah, and it's hard to know how it would feel when there is loads going on. Usually each unit doesn't need to be that smart for it to still feel like a lot of things are going on
Ooh that's a cool system. It's always nice when things are flexible enough to add onto later without a major rewrite.
Yeah I should build and test with the simple flow fields first.
Do you have any external things yet so we could have a look? I have a blog but I am very lax at doing anything with it at all
To be honest I haven't touched this project in about half a year and I'm only now picking it up to try again. I can dig up the proof of concepts I built for various mechanics but I'd also need to get some screen capture software installed (since I have been developing for myself and not thinking at all about marketing)
As much as I would love to finish this project realistically I don't know that I'll have time to do so and so I am not sure it makes sense to share (unless you think people would be interested in the technical side of things and seeing progress even without the promise of a finished game)
I am very much in the same boat. I certainly want to get it out there but it's mainly for me. I get 'too bogged down' in the technical side as that's what interests me, and other aspects don't get the attention that they should!
Yeah I relate to that to a point. I spend a lot of time designing the game and figuring out the interesting technical changes of the design but then I have trouble maintaining interest once I've done all the fun parts.
What do you think about AI approaches in diplomacy strategy games?
I prefer HTN + utility over GOAP, what is your opinion?
The player has some neighboring countries. They can be friend/ally or enemies. The player can trade, invade, make sanction, change policy, customs, tariff, entry permit, etc.
and is there any framework/tool for HTN in unity like planner or I should implement it myself from scratch?
About utility AI, I have seen it is generally implemented and the score is calculated using AND operation (multiplication) between consideration scores. Is it correct?
Is there any more complex approach to combine them?
For example: C means consideration score
R1 = (C11 * C21 * ... * Cn1)^(1/n)
R2 = (C12 * C22 * ... * Cn2)^(1/n)
Rn = (C1n * C2n * ... * Cnn)^(1/n)
R = Max(R1,...,Rn)
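In code, that grouped geometric-mean-then-max scoring might look like this (a sketch; it assumes each consideration score is already normalized to [0, 1], as is usual in utility AI):

```python
def utility_score(consideration_groups):
    """Score each group by the geometric mean of its consideration scores
    (each in [0, 1]), then pick the best group with max.
    A single zero consideration still vetoes its whole group."""
    def geometric_mean(scores):
        product = 1.0
        for s in scores:
            product *= s
        return product ** (1.0 / len(scores))
    return max(geometric_mean(g) for g in consideration_groups)
```

The nth root keeps groups with different numbers of considerations comparable, since a plain product shrinks as you multiply in more factors below 1.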
@quick latch I think it really depends on specifically what you want your ai to do. HTN is, in my opinion, great for creating really diverse routines when you have specific situations you can easily enumerate and want control over how your agent behaves in each.
GOAP is, I would argue, a technically more general problem solving AI in that it is performing a dynamic search of action/state space to navigate towards goals.
If you want to simulate many agents you may want something completely different like flow fields or scalar utility fields that you can compute globally then use trivially for batched groups of agents.
Each of those things has its place depending on the game.
I don't see any obvious reason not to use a different scoring computation other than compute vs behavior trade off. What advantage are you expecting to get with C means?
No, my question is about combining scores in utility ai. Generally, they are combined using multiplication (geometric mean) but it can be more complex.
Define different considerations and group them, for each group calculate the geometric mean, and finally use Max
I want to know if it is common or not
This game is a diplomacy game. I just want to select a good diplomacy based on situations and the world state among several diplomacies
I can't speak to whether or not it is common but that seems like a reasonable approach.
Something similar is commonly done when ranking heterogeneous data. If you want to rank things in a way that is deterministic but you have multiple types of data and multiple different ways of ranking different types of data, then a naive mixed comparator will not be consistent and your ordering will not be deterministic. A common solution is to group similar elements together first and use a comparator per group to order those elements. You can take the top result of each group and rank those using another comparator. This can be done recursively as much as you want, and it is a great way to impose well-crafted ordering constraints on highly heterogeneous data.
Using a standard approach like utility AI and defining your own formulas seems like a good way to go for a diplomacy game, so I'd say you are on the right track (or at least a reasonable track).
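A rough sketch of that grouped-ranking idea, using per-group sort keys and one shared key to order the groups by their top element (all names here are hypothetical):

```python
def grouped_ranking(items, group_key, key_by_group, top_key):
    """Rank heterogeneous items deterministically: sort within each group
    using a group-specific key, then order whole groups by their best member
    using a shared key."""
    groups = {}
    for item in items:
        groups.setdefault(group_key(item), []).append(item)
    ranked_groups = []
    for key, members in groups.items():
        members.sort(key=key_by_group[key])  # per-group ordering
        ranked_groups.append(members)
    # Order groups by their top member with a single comparable key.
    ranked_groups.sort(key=lambda g: top_key(g[0]))
    return [item for g in ranked_groups for item in g]
```

The same function can be applied recursively if groups themselves come in groups.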
Does anyone have a way to do pathfinding through a 3 dimensional volume like an asteroid field?
I found some very vague and theoretical solutions on google but nothing in engine
Using godot btw
no different than 2d really
you'll have to make a grid
that said, you can probably just do avoidance rather than real pathfinding, since most of the space between you and a target is likely empty
Please elaborate on avoidance
raycast ahead, see if there's obstacles, decide to go left or right, move enough so that you're not going to collide
can maybe pathfind that bit
what does your asteroid field look like?
It's randomly generated
Maybe I can have pathfinding on a 2d plane and then avoidance vertically
you can also use an octree/similar to make a grid
I have no idea what that is
looks like this page is solving a similar problem to what you wanted
Thanks for that, I may do a different idea because the godot implementation is sparse
you'll have to diy
<@&133522354419662848>
I got randomly pinged here wtf
apologies, discord autocomplete is stupid
it thinks that myoshi is the same as mods
working on ai character emotion generation
cool
sry I moved the app here: https://fyrean.itch.io/matxinh
does anyone have experience with how difficult/complex it is to make NPC behavior for a settlement-builder sim like Going Medieval?
how difficult is it to build that system and have it be flexible enough for more behavior implementations?
Hii
what's the best pathfinding algorithm if I don't have weighted paths but have a big map? I need one which is not too overkill and is easy to implement
bfs?
gradient descent?
what graph do you have and how do you define "best"?
no graph yet, i have a 3d game level
as for best, i want something very simple to implement, sorry for not clarifying that
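For an unweighted map, plain BFS over a grid is about as simple as it gets. A minimal sketch, assuming 4-connected cells and a `passable()` predicate you'd supply from your level data:

```python
from collections import deque

def bfs_path(start, goal, passable):
    """Shortest path on an unweighted grid via breadth-first search.
    passable(cell) -> bool decides walkability.
    Returns the list of cells from start to goal, or None if unreachable."""
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt not in came_from and passable(nxt):
                came_from[nxt] = cell
                queue.append(nxt)
    return None
```

For a big map you'd first bake the 3D level into a coarse grid (or octree cells) and run this over that; A* is the obvious upgrade if BFS ends up too slow.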
what u payin
Absolutely not you with that attitude lmao it is $78k USD for anyone else interested
lol
in money or in crypto? 🤣
what do you mean by that
man asks how much you'll pay him to do work instead of going "yes daddy please let me do free work" and its a bad attitude
wrong channel too, go in #hiring
@supple mesa Dude gets mad because someone asks what the pay is. Fuck outta here
Also, low balled the fuck out of that salary for AI dev. Data scientists make at least twice as much as that right now.
good luck baby girl
Game AI dev isn't really in the same pay league. The salary isn't bad, though I'd take care with the dude claiming to provide it.
You certainly weren't in the wrong for asking up front about the pay.
Hey all
If you wanted to tell ChatGPT that 'You are now writing in the style of Ernest Hemingway, all responses should be in that style'. What exactly is it that it does ?
I'm wanting to write a 'personality engine', but I'm not sure how to go about it. Should I just make a list of personality traits, log all correspondence, and then have it auto-deduce style cues?
What about handling 10's of thousands of interactions at once ? How to delegate such things?
Is a 'User Agent' applicable in this instance ?
im not sure of my nomenclature
handling 10's of thousands of interactions at once ?
that's going to break the bank.
It's autocomplete on steroids.
Conceptually it remembers text it was trained on which is associated with Ernest Hemingway, and autocompletes from there.
What is a "personality engine"?
At least proof your bot generated pitches before posting them 
this mf is the final boss of shitty managers
ay im the lil p diddy
Anybody else tried edge ML for their games yet with any transformer models?
Had some surprisingly good results on some prototypes by bundling llama.cpp and a quantized version of Google's Gemma 3 4B model.
Mostly used for agentic stuff, but can work for direct chat, comparable to GPT-4o (API version) in surface level conversations (possibly more).
And it can do vision, with some success.
Works with some VRAM to spare on other apps on a 6GB GTX 1060.
Results with sample image.
Qualcomm also announced some performance improvements for llama.cpp on mobile devices last month:
https://www.qualcomm.com/developer/blog/2024/11/introducing-new-opn-cl-gpu-backend-llama-cpp-for-qualcomm-adreno-gpu
Currently planning deployment as a sidecar app for an Expo frontend with React Three Fiber.
This little thing also helps with the agentic parts: https://huggingface.co/blog/smolagents
Curious if anyone's trying a similar setup or use case.
Hello, I am a programmer professionally and now I want to make new projects in the field of AI / ML. I saw a lot of great open-source video / audio / text models and I want to work on a project related to all of them. I was not really good at maths / physics even in high school and never took them seriously. Would my lack of that knowledge hinder me in learning AI / ML? I am open to learning the topics that are required for AI, but I don't know how much is needed to understand and implement my own models / fine-tuning / reinforcement learning.
Re: Math. Yes and No. Do you want to fully understand what the code is doing? If yes, then you better study and fully understand matrix operations, differentials, and integrals. Otherwise you'll have no clue what's going on.
Physics depends on the project/data.
If you cannot read this or understand what any of this is, then when you get a data spec sheet of the model training and expected results, or you're giving the data scientist the results of the training, you will run into "Umm... help?" moments often. This is from someone who spent 2 months trying to fumble through it via random YouTube videos. Better to dedicate a good focused period of really trying to study it than just piecemeal it like I did. For context, I failed Algebra II in high school and never went beyond that, even in college.
Yeah, I don't understand any of it, but wouldn't it take years to learn the maths topics themselves?
No, focus on those three aspects
Depending how long ago HS was. My mature brain was more ready/due to my interest in the field/and just less hormonal issues. You said you're a professional programmer, so you have developed the analytical mind to understand the concepts and you'll draw parallels with programming when you review it now. At least that was my "Ah ha!" moments when things start to click.
You also have a 1-to-1 connection of WHY you're studying this math concept. As opposed to the age old question of, "When will I ever use this in my life? Why bother?" Framing and focus is huge in this aspect.
Most of that is true with me too, it's just that whenever I see Maths / Physics, I get this imposter syndrome that I am not good enough and probably can't compete with someone who has been studying it since the beginning
Reframe the statement. If you were learning a new skill, such as woodworking. If you don't ever take a woodworking class because "carpenters have been doing it for decades before me" so why bother? Or any other skill you learn as an adult. Cooking, driving, art, dancing, whatever. Frame it like that. You just now have the time/maturity/to dedicate towards that learning.
I'm a firm believer of "we invented computers and calculators for a reason". I never fully learned my multiplications table and can't rattle off the solutions to any of the 7x (or higher!) whatever numbers. I've been in the game industry and developing/programming for over 25 years now and did very well. Hasn't stopped me from learning and progressing 🤣
Thank you so much Specter for your amazing advice, you are awesome my friend ❤️
You're quite welcome! Good luck, it's a struggle at first, keep at it, because once it clicks. You'll thank yourself and be that much more in "demand" in the field.
Thank you, I am personally interested in doing my own projects with it, especially with audio and video
And this is the best time
Good, in a couple months I can bug you for video help. I've stayed away from it for now because that just was another level of complexity I couldn't focus on at this time. By the way: if you have a Gemini Advance/Pro whatever subscription. Check out: https://labs.google/fx/tools/whisk On the left you provide a "subject" image, "scene" image, and "style" image. Or text-to-image generate right there on the side bar what you want. Then in the bottom center panel, tell it what you want the video of those three things to do. Fantastic results and great if you're editing a bunch of small clips to make a narrative story video short. Like two characters talking and changing scenes and the like.
Great Man, I really like Gemini Stream screen API, I think I could build a lot of potential applications using that in education
Friend me off server. Got a question for you regarding that. Because I might have someone you want to speak to possibly, or at least keep a contact for the future.
I sent you the request 🙂
There's plenty of AI that can be done with just graphs. How are you at DS&A?
I am decent with DS&A, but can focus / learn that as well if required
Neat!
Here's an exercise:
https://en.wikipedia.org/wiki/Missionaries_and_cannibals_problem
Write an AI that solves it for arbitrary amounts of missionaries and cannibals
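For reference, a minimal BFS solver sketch for the generalized puzzle, assuming the usual rules (the boat holds up to `boat_capacity` people, needs at least one occupant per crossing, and cannibals may never outnumber missionaries on either bank):

```python
from collections import deque
from itertools import product

def solve(n_missionaries, n_cannibals, boat_capacity=2):
    """BFS over states (missionaries on left bank, cannibals on left bank,
    boat on left bank). Returns the shortest state sequence, or None."""
    def safe(m, c):
        # Each bank is safe if it has no missionaries,
        # or at least as many missionaries as cannibals.
        return (m == 0 or m >= c) and \
               (n_missionaries - m == 0 or
                n_missionaries - m >= n_cannibals - c)

    start = (n_missionaries, n_cannibals, True)
    goal = (0, 0, False)
    parents = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:  # rebuild the state sequence
                path.append(state)
                state = parents[state]
            return path[::-1]
        m, c, left = state
        for dm, dc in product(range(boat_capacity + 1), repeat=2):
            if not (1 <= dm + dc <= boat_capacity):
                continue  # boat needs 1..capacity occupants
            nm = m - dm if left else m + dm
            nc = c - dc if left else c + dc
            if 0 <= nm <= n_missionaries and 0 <= nc <= n_cannibals and safe(nm, nc):
                nxt = (nm, nc, not left)
                if nxt not in parents:
                    parents[nxt] = state
                    queue.append(nxt)
    return None
```

The classic 3-and-3 instance needs 11 crossings; with a two-person boat, 4-and-4 has no solution at all, which the search correctly reports as `None`.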
So this is about artificially intelligent things. Nice. X E.
Hello everyone, I am in my master's program for ML and Data Science and working on a crack-detection project for my thesis. The application would be able to take any image of a building with cracks / spalling etc. and detect the crack type and severity and mark them on the image.
I collected my own images plus around 5,500 high-quality images of different cracks, spalling, etc. from industrialists, annotated in XML files. So I have two folders: the annotated XMLs and the images.
In the past I had done text-based Transformer models; however, I am getting a bit confused about how to proceed with the project from here.
How can I train my own image model from here to achieve the desired results? Are the 5,500 high-quality collected images enough to train the model and achieve at least 80-90% accuracy? Also, given the project and current implementation, how would my own model differ from, say, feeding the image to OpenAI's 4o to detect the cracks, spalling, etc.?
Specialized models tend to be more efficient and cheaper to run (especially comparing lighter CNNs versus heavier transformers), sometimes with better accuracy, compared to more powerful general purpose ones.
That's before getting into additional privacy, compliance, and deployment options.
About the actual model type...
Is there a requirement to train from scratch? Because you can 'fine-tune' both CNN and transformer based models. Start with an existing one, then make something like a QLoRA adapter for it.
For vision/multimodal transformers, consider approaches like this unless hardware cost and training time don't matter to you: https://docs.unsloth.ai/basics/vision-fine-tuning
I'd recommend checking out Gemma and Pixtral if you're interested in local models.
Here's an example Colab notebook, for tuning a Llama model for radiography: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb
Example runs.
Has anyone tried to work seriously with all those new fancy 3D model/rig/animation generators? I'm looking for some constructive feedback about them.
Not seriously on my side, notably because I do not really intend to use 3D (so I don't care about quads, for example), I am just using it to generate other angles for 2D assets.
I noticed a nice upgrade in aesthetics these last months. Rodin gives some very pretty results.
Of course, such non conventional humanoids will not work correctly with auto animation (here with Mixamo) but I have no reason to believe that it could not give some correct results for a conventional humanoid.
It still yields some interesting results, and I heard about some papers that have a lot of potential for optimization.
Interesting, so right side of 1st screen is your 2D asset that you created and it generated the 3D model?
Indeed, it's just one image and all angles look rather good (except it made one eye instead of 2). Well, I had some issues with the hair being too flat or a bit weird, but I can get it, the image has a pretty weird haircut.
I made just 2-3 quick tries before getting this result (and there are like 10 rerolls allowed). I think a better prompt or some luck could have fixed the minor issues. If not, there is always manual corrections anyway.
Impressive, thanks for the feedback!
You are welcome, if you want to test other tools, I heard a bit about Meshy too, it could be worth comparing both. Meshy can also animate.
It's just a research paper, but you could find this interesting, about animation: https://www.youtube.com/watch?v=7EA5JM1DI9Y&t=12s
Thank you so much for the answer, I actually want to first make my own model as my professor recommended it, but if it fails with the results, I would fine tune it on open source models
I gathered more images, did transformations on all of them, and now I have a 17k-image annotated dataset (68GB total folder size) which I want to train with the YOLO large model. My system has 2GB VRAM, and when I try to run the training it errors out because the CUDA GPU memory ran out. Is there any way to estimate how long the training would take, so that I can get a GPU server within budget, and which GPU would be suitable for this? Secondly, is it possible to run this on my system itself, regardless of how long it would take?
Just use a free Colab notebook.
16GB VRAM on the T4.
2GB VRAM is painful.
Are you trying to actually fine-tune transformer models or something else?
For transformers, you can "maybe" tune smaller text-only models like the Qwen3 class and its 1.7B/0.6B variants on something like a 1060-6GB, using Unsloth.
But even that has issues (mostly related to gimped 16-bit float compute) that require hacks/workarounds.
For multimodal/vision models over 4B, you'd probably never be able to keep it in VRAM, meaning training times increase by orders of magnitude unless your CPU is unusually good. And even if you kept in VRAM, the TFLOPs on a 2GB GPU are probably not going to be decent (I don't know the model of it, but can infer as much just from the fact that it has 2GB VRAM).
But supposing that's not an issue, you said you're using YOLO large?
That model might be much smaller than the transformer stuff...
Maybe it's possible with the right params, but even then, you'd still want to try it on Colab first to make things easier.
You mentioned YOLO.
Are you using this thing then?
Tried this one by any chance?
https://huggingface.co/tencent/Hunyuan3D-2
I am basically using ultralytics and the YOLOV11L model for training
Don't really have to write everything from scratch, because most of the heavy lifting is done by these two
So thats kinda fine tuning to an extent
Problem with Google Colab is that the data is 68GB, and copying and unzipping it in Colab takes so long that the session runs out and everything restarts
So a couple of hundred megs at most:
https://huggingface.co/Ultralytics/YOLO11/tree/main
Hmm, not sure what framework you're using for them.
And what params to tweak to avoid OOM errors.
Out of curiosity, how are you uploading to Colab?
Are you uploading to GDrive first?
Then mounting?
Or something else?
Uploaded the folder to Google Drive (68GB) and then mounted Google Drive in Colab
But Colab has about 70GB free storage
Hmm, should technically fit then unless you got a lot of other stuff.
How long does the mount take?
Hmmm, but you are right, I am gonna go back to Google Colab and try to do it there
And is there a reason you have to copy to disk?
You can do in place reads off of the mounted drive, or write to it.
It takes about 20 minutes
I mean the 70GB disk space is in addition to GDrive space right?
is there any way to save the session so that if I open a new one, it still shows the uploaded folders?
from previous
Yes, you save/cache stuff to GDrive directly.
also, If I change the runtime configuration, it resets everything
Only if you save to disk instead of the mounted drive's folder.
IIRC, there is no major speed issue requiring you to copy after mount.
R/W should be about the same.
Maybe a bit slower.
Just read from and write to the mounted dir directly.
Sure, gonna try it again..
Thank you ❤️, you are awesome
Oh yeah...
Is this even game related though? 
😁
if this works, I promise I would go back to making games 
I heard a lot about it but didn't try it yet. I need a new graphic card, which will take a week or so. I will tell you once I try if it's on par with online models
There's also a paid API for a few cents per model, with some demo videos FYI.
So I finally set up everything in Google Colab successfully, and when I start training, I got this error:
!yolo detect train data=/content/data.yaml model=yolo11n.pt epochs=10 imgsz=640
I tried on both models, large and normal
It says the tensor list is empty for some reason when it shouldn't be?
Need to check full logs to debug.
I'd recommend sharing with someone or clicking the Gemini button in the top right.
You think I need collab PRO?
I fixed the issue, but after 7 hours, it prompts this
Depends, even pro might not last long enough.
Personally I'd just run it on RunPod tbh. Or Genesis Cloud, but their signup ain't working for me in my region.
What are your thoughts on AWS GPU instances?
Last I checked they were more reliable, and pricier.
RunPod and Genesis Cloud were the cheapest I found with some kind of persistent storage option.
GC is a bit more basic, and RP can potentially be cheaper if used right for some use cases.
Etc.
Like 7 bucks a month for ~100GB of NAS you can mount to the GPU instance on RP.
As for the GPUs themselves...
You can buy 1 spot instance of something like the AMD MI300X (192GB VRAM) for under 2 USD per hour in total, charged by the minute. I think it also beats the $40K Nvidia H100 in TFLOPS, so bang for buck is quite good.
Just need to cache progress carefully for interrupts,
A full pod of 6 MI300X GPUs (with 1152GB of combined VRAM, several terabytes of RAM, and hundreds of vCPUs) is about 10 USD per hour, charged by the minute (last I checked, assuming I didn't misread something).
It can actually be cheaper to buy the seemingly more pricey GPUs in some cases, since you're charged by usage time.
The stronger ones finish tasks sooner.
So bang for buck in terms of TFLOPS matters.
For actual cost savings.
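To put hypothetical numbers on it (made-up runtimes, rates roughly like the ones above):

```python
# Hypothetical job: the same training run on two rented GPUs.
# Charged by the minute, so total cost = hourly rate * hours to finish.
def total_cost(rate_per_hour, hours):
    return rate_per_hour * hours

slow_cheap = total_cost(0.379, 20)  # 16GB card, ~20 h to finish (guess)
fast_pricey = total_cost(1.99, 3)   # MI300X spot, ~3 h to finish (guess)
# slow_cheap ≈ $7.58, fast_pricey ≈ $5.97: the pricier GPU is cheaper overall
```

So whenever the faster GPU's speedup exceeds its price ratio, it wins on total cost.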
Same would apply to AWS.
But I dunno what their exact GPU offerings are.
RunPod (and stuff in the same "tier" like Genesis Cloud, TensorDock, Vast.ai, etc.) is just generally much cheaper in exchange for being less reliable, less integrated with AWS or similar ecosystems, etc.
Even cheaper providers exist, but can skimp on features and reliability even more.
https://aws.amazon.com/ec2/instance-types/g4/
These are Amazon's cheapest GPU offerings I think.
For a 16GB Nvidia: $0.526 on-demand hourly rate.
For 16GB AMD: $0.379 ODH.
For comparison, cheapest reliable option I saw on RP was $0.19/hr spot for a 20GB Nvidia GPU (RTX A4500).
$0.18/hr for another low avail. 16GB GPU (RTX 2000 Ada).
Vast.ai with its price competition might be the absolute cheapest but, IIRC, you're dealing with randoms and persistence is harder.
Wow, thank you so much for the detailed answer.
So as far as I can see, while training, GPU memory usage was about 2-3GB per epoch for the YOLO11n model with batch 16 and 10 epochs. Do you think the 16GB AMD at $0.379 ODH from AWS would be enough to pull this off?
zaidbren thanked teknought
Hiya, just dropping by to ask a question on using AI like Copilot to help with programming. I've dabbled in C#, Unity, Java, as well as Python, and I'm still fairly a noob at GDScript. I was thinking about using Copilot as an assistant to help me with the programming part of using Godot Engine, since I've only worked from others' instructions on how to code certain things in the languages I'm familiar with.
I'm wondering if anyone else has thought of or has been using A.I in such a way?
I think you can definitely use AI to learn programming! Just ask for the basics, like having the AI comment every line it writes for you so you can actually learn instead of blindly following along. A lot of people will tell you that using AI is wrong, but honestly, just use it - just make sure you're actually learning from it. And for GDScript, if you're working on Godot 4, ask your AI assistant about tween functions - the response might really interest you.
Ooo! I'll make note of that! Thank you! 😄
silenttails thanked onacon
i did post this to the narrative channel but im moving it... this channel is a better match!
sharing something i've been working on the past couple of days. for reference this is for a mainly text based game played through discord that is like fighting fantasy choose-your-own-adventure style, except with deep lore, and a massive open game world (this is an on-off project of about 30 years)
https://www.ssod.org/building-the-languages-of-cryptillia
feedback and thoughts welcome on this process 🙂
the game world, its writing and lore, are a life long project. the images, and side content, ive only really been able to deliver at scale over the past 2 years as generative models sprang up that saved me trying to find people to make thousands of images.
just to make absolutely clear: i do NOT want to just be flamed for the idea of even using Generative AI in a project. Thats not what i post for feedback for.
because some people on other game dev servers have taken to doing that.
hiii guys, i worked on a full AI project this semester and honestly im really proud of it, we managed to make it to our university's projects fair competition, but apparently a huge part of the scoring and who gets to win is how many views and likes the video that was posted on the youtube channel makes
I know this might seem really petty
but i would REALLY appreciate if you drop a like on this video
i'd appreciate it sm if you guys like it, and i would love to have your thoughts about it, i poured my heart and soul into this
that's just mean
then pay for promotion of the video 
this sounds like a broken metric
but you just got a like and a view from me hope it helps
yeah ikr, people get so emotionally heated about ai gen.
it's like "waa waa you didn't pay for your art. you should have commissioned an artist or a full art team to draw all 3000 images"
this isn't a role that was ever available; AI didn't take a job any more than piracy by those who can't afford your game is a lost sale
I worked it out. 3000 professional images is 7.3 years without a break for a good artist, and a cost of 2.4 million USD.
I'm not made of money especially not that sort of budget lol
basically they just started pulling apart the fact I used AI to make images.
IMO AI generated images don't compete with commissioned artists.
AI competes with stock images.
If you want something unique or something specific, you'd still hire an artist, because AI won't do that
IMO it's not different from using an asset pack.
hmm this i disagree with, I've generated very specific unique stuff that only existed in my head until that point
and the generation was 99% there
like this thing, a terraforming machine gone rogue destroying a world one piece at a time with nobody to stop it except a medieval society.
helps that it's a text based game where the images enhance instead of being everything
it's kinda like using an asset pack yeah but if you're prepared to put the time in you can get very bespoke outputs
👀

What ai do you people use to generate character designs for your games ?
I don't, but I could use Metahuman?
.
does anyone have a working Kohya SS Google Colab to train a LoRA with their GPUs?
I've been trying to make this one work all day, but I'm not exactly sure why Hugging Face keeps breaking down
Do we have anyone here that knows about Fuzzy Logic and especially Fuzzification and Defuzzification? Would love some help cuz theres something I dont get
Thanks in advance!
Sure, what's up?
mfs really let a dirty fucking clanker do the work for them and act like they're real game devs
you trust them god damn buckets of bolts
soggy synapse sacks are known for their misfires 
both are fallible.
What's the possibilites of this app:-
Businesses put in their social media and website links, and the AI crawls through the popular content on IG, TikTok, and YouTube around their brand, competitors, or similar products, and finds the best marketing video they could make to get more engagement and sales. It would analyse popular videos using Gemini and find what made them popular / the hook.
Google recently published about Genie 3, their "world model", an attempt to generate interactive worlds with neural networks : https://www.youtube.com/watch?v=PDKhUknuQDg
Although it is not available publicly, it is quite interesting to compare with the presentation of Genie 2, 9 months ago : https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
The distortion and blurred parts almost disappeared. I guess they can still occur to a greater extent to what we see in cherrypicked extracts (you can see some on the yellow flowers) but the progress is impressive. Google openly claims to target modelling animation and fiction, although it is not the only goal, the others being robotic training and real world simulations.
More examples available here : https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/
<@&133522354419662848>
This looks like AI spam channel, is it? Is that why we are making AI?
Why not? I am very interested in the possibilities of AI and Gaming.
I forgot I made that, I intended it as a joke. X E.
Hi, I'm developing Lumen and just releasing a test phase.
Lumen is a text-based RPG powered by AI, where your character’s story unfolds through choices, attributes, and archetypes. Each chapter comes with custom illustrations, dice-based challenges, and rare items to collect.
We’re looking for adventurers to try it out, help us squash bugs, and shape the future of the game.
🎮Play here: https://lumenrpg.com
I'm working on adding an AI Assistant to unreal engine editor via an editor plugin using python. (Similar to copilot / code editor AI but for blueprints)
still need to create the C++ wrapper to expose the kismet modules but it's open-source if anyone's interested - https://github.com/hamleetski/unreal_ai_assistant
I noticed in the game narrative chat that some people might try to make pixel art with AI tools that are not fit for it and produce blurred images (like ChatGPT). So, I recommend Google's Gemini 2.5 Flash image model (also called Nano Banana) if you want to create pixel art assets. The editor is free to use on Gemini, although you may have to use an API service for high resolution without a watermark.
I was surprised not seeing much comment about this model abilities on this subject.
Of course, don't forget to inform people when you use AI, as it becomes harder and harder to recognize.
<@&133522354419662848>
What is the standard viewpoint of a generic generated pixel art asset? Does gemini like to turn it's assets just a little bit, or is this in your api doc?
I don't get your question. These examples (the last four) are "RPG boss" samples, so they're in a three-quarter viewpoint, which is standard for this specific situation, especially if "profile" is not specified.
Anyway, Gemini could easily rotate these images.
Okay. That was the question. If gemini could rotate the images. Indeed it can
Of course, there will usually be small inconsistencies. Here I count 10 tentacles (instead of 8 on the original), so it could take a few shots, a specific prompt, or manual work to get a production-ready result for complex characters
Right. Too many tentacles! Also, what sort of plan or "prompt" might you use in these complex characters to prepare the color scheme for different environments? Such as a boss battle in the desert/ forest/ faroff planet.
I guess I would provide the boss as an inspiration, to get something like this.
"By drawing some inspiration from the model below, make an underwater HD modern pixel art RPG battle scene. The scene should completely be empty, except for this character, nor UI. The scene should be seen from profile, with underwater ruins, an empty area in the middle and a distinct foreground"
I would ask for the character to be integrated to force the pixel scale, then remove it, increase the size of the map a bit (or you'll just produce small bosses), then add it back as a sprite (you may have to work on the light effects, eventually).
Of course, the pixel scale won't be good. Even if you correct the lighting, add a shadow and some additional effects, it might not blend in perfectly. It might be interesting to ask for the background + enemy together, then take off the background, to see if the two fit better.
Anyway, I don't produce my game in pixel artish style right now, so I did not experiment a lot with it.
what makes enemy ai realistic
I've noticed smash bros ai is generally pretty human-like but not quite
you can't really juke them out like a human unless it's a proper feint
-# no threads 🥀
things that make it realistic:
- combos
- kill confirms in air
- ripostes (swinging back after blocking)
- taunting after kills (rare)
- running away from you when you're invincible
- punishing landings (waiting for you to land, so that you have a few frames of not being able to do anything)
- worse at defense if they were hit recently
things that make it less realistic:
- perfectly reading most attacks but seemingly ignoring them sometimes
- whenever they grab you, they always pommel you before throwing you (players sometimes immediately throw you so you have less time to process and respond)
- "giving up" when they can't recover (get back on the stage)
things they could do to be more realistic
- "whiffing" defense (timing is too early)
- knowing how to use mixups on YOU (and falling for yours, sometimes)
- juggling (air combos that aren't at all guaranteed, but still work)
in general, bots feel like they try to play perfectly but intentionally make mistakes. bots also seem to lack the ability to fall for traps (and often don't set them, since they're unsound)
chess bots do this too. players do things like:
- move a piece into danger, but with a proper followup if it weren't in danger
- try to protect a piece but fail
- win material instead of playing a long, complicated checkmate
- continue trying to fight back in a lost position
what bots do :
- completely ignore threats
- play moves that are "obviously" bad (e.g. a move that damages the position and has no clear objective)
- ALWAYS finding checkmate sacrifices
- play the most boring opening on earth (and usually the same one)
- tend to just go full defense mode when they're losing
overall, ai doesn't take enough risks, and it can't differentiate between things that are easy and hard to spot. how do we program this into bots?
- neural networks
When set up properly, smaller ones tend to be GREAT at predicting patterns (and good at missing the unpredictable). You could probably use a fuzzy logic ai, although tuning would be difficult
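As a toy sketch of what a fuzzy-logic bot could look like (entirely made-up membership functions and rule weights, just to show the fuzzify → rules → defuzzify shape):

```python
# Toy fuzzy-logic controller: map "distance to player" (0-100)
# to an attack aggressiveness in [0, 1]. All numbers are invented.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def aggressiveness(distance):
    # Fuzzification: how "near", "mid", "far" is this distance?
    near = tri(distance, -1, 0, 40)
    mid = tri(distance, 20, 50, 80)
    far = tri(distance, 60, 100, 141)
    # Rules: near -> attack hard (0.9), mid -> probe (0.5), far -> hang back (0.1)
    # Defuzzification: weighted average of the rule outputs (centroid of singletons).
    num = near * 0.9 + mid * 0.5 + far * 0.1
    den = near + mid + far
    return num / den if den else 0.5
```

The nice part is the bot's behaviour degrades smoothly instead of flipping between hard-coded states, which reads as more human.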
Unfortunately, I do not engage with CLANKERS.
This is why I have a second approach:
- logic
(Edit this later)
Thank you for putting something actually interesting in this chat. Looking forward to part 2. One thing I've always found interesting is how the human brain is almost as good as a computer at certain calculations, but only to an approximate result, whereas a computer can come to a definitive, perfect conclusion even faster. For example, a person can accurately estimate a count from a pile of junk and generally get quite close, and a human can track an incoming ball and calculate the physics fast and accurately enough to respond well, but with plenty of room for error. I feel like if you could force a bot to come to close estimates instead of definitive answers, that could be a step in the right direction. Edit: Turns out that's basically fuzzy logic, interesting 😅
<@&133522354419662848>
This is a nice clean Starting point for any AGENTS.md file
Agents Working Rules
Critical Safety Rules Must be followed every turn and regressions must be proactively reported
- Zeroth Rule: If an edit causes a file to be deleted, corrupted, or severely damaged, immediately stop work, report what happened and what you were doing, and propose restoration options (restore from .bak, rebuild selectively, or request user direction).
- Backup Before Edit: Before modifying any existing file, create a .bak copy (increment suffix as needed). Never skip this step.
Editing & Checks
- Check planning docs and suggest next likely steps after each turn
- After edits, run format, lint, and build checks as appropriate. Never run full applications unless explicitly instructed.
- If checks fail, fix the errors and rerun until clean.
- Git usage: read-only only (status, diff, show). Do not pull, push, reset, or restore from HEAD.
- End-of-turn backups: remove temporary .bak files using Remove-ItemSafely (Recycle module). If unavailable, move backups into a root .bak folder instead of deleting.
- Update planning docs / Checklist files after each turn. Add entries for Ad-hoc edits.
- "_Plan" file is the Source of Truth / Checklist is the current stepwise approach.
Coding Conventions & Design
- Design for modularity and extensibility; keep a clear separation of concerns.
- Include stub placeholders with TODOs for planned future features.
- Comment only where necessary; avoid unnecessary clutter.
- Favor small, focused modules/functions; refactor if a file approaches “god class” size (~3000+ lines).
- Follow project-appropriate formatting/linting; prefer deterministic, reproducible outputs.
LLMs can't become AGI as their language-first world model is not good enough and has limitations to fully understand the real world (and without hallucinations).
AGI is possible however if we were to combine multimodal encoders to a single latent space (that isn't trained language-first), let a model process that latent space and have multimodal decoders at the other end. But AGI comes with the risk that, through training, the AI may become self aware and develop a will, a will that is not aligned with humanity.
So AGI is still pretty far off, because the concept is really, really hard to train given the insurmountable amount of data it requires. Maybe it's better to automate most of the world (human basic needs) with LLMs and ML, where we do have an understanding of their quirks and limits, and leave AGI in very controlled environments until we've solved the alignment problem.
(My definition of AGI: an AI that can do all the planning and tasks that a human could do in the real world.)
So I believe the near future will be LLMs and MLs as the main drivers and I might as well jump on that train (making and using tools around these models), as I'm not educated enough to help solving the AGI alignment.
I just committed 1500 lines of code over my phone. AI is just so damn cool. And yes, I'm aware that it's not going to work when I sit down to test it, but still, that's a nice feeling.
Hi, I joined the server long time ago but haven't been active so far. I'm a programmer and a hobbyist gamedev, who's working on a VR/NSFW sandbox game with AI on Godot.
I found that the unofficial Godot server is extremely hostile towards anyone who does anything with AI. So, I left the place and found this channel exists.
Perhaps I'll be hanging around and talk about gamedev/AI stuff from time to time.
My game project is very early in development. For now, I drive an NPC using an AI agent and use various AI-based tools to do things like generate speeches or lipsync animations.
(Edit: I didn't know there are two different Godot servers, and mistakenly mentioned it was the official one. I edited it to fix my mistake.)
Integrating an AI for an NPC is usually applauded by most communities, as it's a human factor that devs otherwise couldn't add
Using AI to generate game assets and code is where communities start batting eyes
I do both, although I'm not using AI to "vibe code" stuff. I drive my NPC with an AI agent, and I'm very interested in creating game assets using AI tools although I'm pretty familiar with non-AI tools like Blender, Substance Painter, or Krita.
As a programmer, I find AI assistants and agents to be invaluable for what I do, which I think to be rapidly becoming a norm in the field I work.
I haven't used AI for asset creation much yet (since my project hasn't reached that stage), but I recently tested creating mocaps from videos using a Sam3D / ComfyUI / Blender pipeline with a good result.
I'm a programmer myself, have been tech lead of several tech labs
Anyway, I don't mind if others don't like using AI for such purposes, as long as they don't blame or accuse me etc. for that.
And I vibe code as well
I compare agents with giving instructions to junior devs
And it works out about the same
I feel it's like having a personal dev team of my own. In my current project, I'm migrating 3-40 legacy Webpack3/CRA microapp projects, which would've taken an enormous amount of effort if it were not for coding agents.
The only problem right now is that there aren't any devs going through the same growth path that today's senior devs had. So a decade from now there will be a shortage of experienced senior devs. But hey, maybe that's irrelevant in a decade, as AI moves so fast.
Yeah, that coincides with how I feel about it. I think it's mostly a multiplier, so it helps those using it as a team of junior devs whom you can guide/supervise much more than those who use it as an excuse to not to learn coding.
I feel for those who seek junior programming jobs now.
In cities with tech industries there are jobs
Everywhere else, not a chance
It hasn't reached that stage where I live, but I guess it won't be long before it becomes the same.
I've connections with IT directors of big companies and they talk about hiring patterns and how it has changed
In Europe it's sort of okay for cities
America's job market is pretty much on fire
I can't help but fear AI will collapse the economy one way or another, even though I appreciate its usefulness for what I do.
There will be an american led economic crisis in 2026
As in, it's actually already going on
The rest of the world will feel it as well, but not as much
The bubble will surely burst, and even if we somehow survive that, there'll be a bigger problem when AI will start replacing non-IT jobs en mass, once things like robotics or world models get advanced enough to give AI a physical presence in real world.
I mean, if I have to believe my connections in finance, the AI bubble is only a part of the crisis
There is a substantial residue of post-covid hiring season that needs to be laid off
And they're not even talking about a bubble popping, as they predict it won't be a major economic effect
If anything, AI is here to stay and economic crisis is here whether there is a bubble or not, jobs will be replaced/automated for sure, people will adapt but in what way is very unpredictable by nature.
I have a bit crazy/pessimistic 'pet theory' of mine about the prospect.
Usually, a disruptive technological advancement destroys jobs temporarily but creates more in the long run (e.g. photography drove portrait painters out of their jobs but started the whole film industry.)
But the problem with AI is that it won't be confined to just one sector or another, and the jobs it may create would likely be taken by a different type of AI. For now, AI doesn't have much physical presence, so things like programming jobs are affected most. But once this limitation gets overcome, it'll mass-replace less sophisticated jobs like clerks or cleaners, and those people won't start AI startups and get rich after they lose their jobs.
There will be jobs in aiding AI for a while
I just hope I can still be able to enjoy doing my stuff with AI when or if that happens.
I was wondering if I should refactor my current AI agent workflow that drives NPCs to run indefinitely instead of being polled periodically, which is how it works currently.
I brainstormed with ChatGPT (I don't use it except for such QA stuff with a free account - I hate OpenAI) about this idea, and it looks like it's not really feasible:
https://chatgpt.com/share/6930e017-b754-8006-a333-c91d584a1a2b
I suspect we need a realtime agent API (not the realtime API that OpenAI provides, which is for low-latency, speech-to-speech interactions) if we are to use agents that way.
I think I should move on for now. I'll be working on to make a system that maintains game lore and selectively loads relevant parts into the context.
I don't think a typical RAG system backed by a vector DB is the right choice for my usecase.
Perhaps I should include the TOC of the lore content and a few essential entries in the prompt statically, then let the agent pull additional entries as needed.
Hello, Everyone.
I’m a Web3 & AI engineer with 7 years of hands-on experience building in the blockchain and full-stack ecosystem.
My Skills:
🔹 Languages: Solidity, Rust, Go, Python, TypeScript/JavaScript, C++
🔹 Smart Contracts & Blockchain: Solidity, Rust, Move (dApps, NFTs, DeFi, staking, DAO)
🔹 Frameworks & Tools: Hardhat, Truffle, Anchor, Web3.js, Ethers.js
🔹 Backend & APIs: Node.js, NestJS, Express, FastAPI
🔹 Frontend & UI: React, Next.js, React Native, Flutter
As an experienced Web3 and AI engineer, I've been building in the AI and Web3 space for about seven years now, mostly focused on integrating intelligent agents with decentralized infrastructure. Early on, I worked on smart contracts that automated NFT royalties and DAO treasury governance. More recently, I've been experimenting with on-chain AI models that make autonomous trading and resource-allocation decisions.
One project I’m proud of was connecting a GPT-based reasoning layer to an Ethereum oracle, letting AI validate real-world data before executing transactions. I’ve also done a lot with decentralized storage + fine-tuned LLMs, where models pull context dynamically from IPFS or Arweave.
Always open to discussions about autonomous agents, ML, and verifiable AI pipelines
I love meeting others pushing boundaries in this space.
I've been working in google antigravity for over a week, and just now realised there is a multi-agent panel o_o
I was planning to work on a RAG system for an AI agent I use in my Godot game.
Turns out I have to work through this weekend for boring my company project, and I'm using even more AI agent for that.
I kinda feel proud of myself... I've been trying to perform a pretty complex project migration with a coding agent, which worked for the most part but still required too much manual intervention.
Today, I decided to just write a specialised coding agent myself, which only took about a couple of hours to prototype. And it seems to work much better for my use case.
To be clear, I'm not boasting about the coding part, which wasn't too difficult, to be honest. Rather, I feel good that I made the right decision in a pinch, to switch from a commercial tool that the company has been paying for with a hackish script that I wrote on the spot, and it actually worked.
What was the agent supposed to be doing that it was failing at?
It was supposed to migrate old frontend projects to a modern build stack and dependencies. It involves 6 major steps, doing things like migrating from Webpack 3 to Vite. The problem is the prompts are over a thousand lines long, not counting reference files, which seems to overwhelm the LLM, making it miss a few files or instructions.
I split those 6 steps into even smaller steps, but it would mean I would need to run like 30 or so prompts in sequence. You can't simply tell the agent to read them in order because it doesn't clear its context after executing each prompt.
The problem is that there are over 20 similar projects like that, and I don't have time to do it like that for each of them.
So, I simply made a minimal coding agent with just filesystem and terminal MCP servers, and built a workflow that reads and executes those prompts sequentially.
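The core loop of that kind of workflow is tiny; something like this, where `run_prompt` stands in for whatever starts a fresh agent session (hypothetical names, just to show the shape):

```python
# Sketch of a sequential prompt runner: execute numbered prompt files
# in order, giving each one a FRESH agent context so earlier steps
# can't bloat or pollute the window. `run_prompt` is a stand-in for
# whatever call spins up a new agent session.
from pathlib import Path

def run_migration(prompt_dir, run_prompt):
    """Run every *.md prompt in sorted order, one agent context each."""
    results = []
    for prompt_file in sorted(Path(prompt_dir).glob("*.md")):
        prompt = prompt_file.read_text()
        # New session per step: the agent only ever sees one prompt.
        results.append(run_prompt(prompt))
    return results
```

Caching each step's result also makes interrupted runs resumable, which matters when a 30-prompt sequence takes hours.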
Claude Code solves this by using 1 agent that takes in the whole file, splits it up in tasks, and gives each task to subagents.
I'm not that trusting in AI yet to convert a whole project though
I want to see it step by step to be sure the right architectural decisions are made and nothing misses
Maybe I should try other dev tools like Claude Code some time. I'm too accustomed to IntelliJ, but that was what I said about Eclipse before switching to IntelliJ.
claude code isn't free tho
I mostly use antigravity, but afaik it's not geared towards completing very large tasks. But I also don't require that.
Antigravity has subagents too tho
If it's a standard feature in more AI-focused IDEs, then maybe I was thinking too high of IntelliJ's agent plugin.
Anyway, I need to get it done in one way or another first and thinking about switching to a different IDE after that. 😅
Released it as it is on Github:
https://github.com/mysticfall/catpilot
I've seen demos of Claude Code doing impressive things
Claude Code can be run as a CLI, doesn't replace your IDE
Good to know. Then I feel far less reluctant to try it out. 🙂
I prefer in IDEs though, so I can see what they're doing xD
I'm currently running 8 instances of it 😅
I'm running 2 claude code agents right now in google antigravity
I don't want 2 agents to overlap over the same code
So can't do too many tasks at once
But it's fair to say, programming everything by hand is not required anymore
It makes mistakes? I fix it. It needs to do something complex? I give an example it can repeat. etc
I still like to code by hand, but now I have a choice about what parts I want to work on myself and what others to delegate to an agent.
Meh, I like to think about code and solve problems
The writing is like, sometimes I just wish I could telepathically write code because typing takes time
And now AI just zooms through that process of writing, all I gotta do is read and correct it
I guess it's a matter of taste. I often experiment with unusual approaches in coding, and I can't learn new programming knowledge if I don't code myself.
Finally, got some free time to work on my game again.
I need to make a lore database for augmenting the context for LLMs, and I'm torn between rolling up my own and adopting the SillyTavern format.
The idea of being able to share a common lore data between my game and SillyTavern is tempting, but I won't be able to use any existing code to utilise it and ST's format has a lot of bells and whistles that would take me quite some time to implement.
My gut feeling is that it'd be better for me to make a custom system in which AI decides what entries should be included based on a TOC & event history.
But it'd incur additional overhead for a roundtrip to the server, which I've learned doesn't bode well in the context of a realtime game.
Decided to roll my own lore system. The idea is,
- A lorebook is a Wiki.
- The source of the lorebook is a local directory containing Markdown files.
- The TOC of the lorebook coincides with the directory structure.
- The ID and other metadata of each entry is kept in its frontmatter (YAML).
- Each document may define additional sections with predefined headers (e.g. 'See Also') that are ignored at runtime.
- At runtime, an ILoreBook instance is built from each source directory.
- You (or an AI agent) may request the TOC of a lorebook.
- You (or an AI agent) may request the contents of selected entries by passing their IDs to a relevant API.
- The returned text will be a valid Markdown document with headings of each selected entry adjusted to represent a correct hierarchy.
I haven't yet decided on what triggering mechanism to use, but hopefully, this could be a base on which I can build.
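For a rough idea, the entry parsing and heading adjustment could look like this sketch (Python; all names here are illustrative, not my actual implementation):

```python
import re

def parse_entry(text):
    """Split a Markdown source file into (metadata, body). The frontmatter
    is assumed to be a simple 'key: value' block between '---' lines."""
    meta = {}
    if text.startswith("---"):
        _, frontmatter, body = text.split("---", 2)
        for line in frontmatter.strip().splitlines():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
        return meta, body.lstrip()
    return meta, text

def render(entries, entry_id, depth):
    """Emit one selected entry as Markdown, demoting its headings so the
    hierarchy matches its depth in the lorebook TOC."""
    meta, body = entries[entry_id]
    demoted = re.sub(r"^(#+)", lambda m: "#" * (len(m.group(1)) + depth),
                     body, flags=re.M)
    title = meta.get("title", entry_id)
    return f"{'#' * (depth + 1)} {title}\n\n{demoted}"
```

Selected entries would then just be concatenated into one valid Markdown document.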
One thing, any tutorial to start AI development?
Maybe there was no reply since 'ai develop' is rather vague, or too broad a term.
Finished with a simple implementation of the idea, except for the trigger mechanism.
I asked AI to generate some test fixture and it came up with this:
Decided to let the agent query the lore for now:
Hope it wouldn't add too much overhead.
Looks like VibeVoice sounds much more natural than Orpheus.
I switched from XTTS to Orpheus to minimise latency, but wasn't satisfied with the result. VibeVoice seems to be a better alternative for me.
My current AI workflow to drive NPCs is based on a short-lived periodic poll model.
It works ok, but I've been wondering if it wouldn't be better if I refactor it to a coding agent style, long-running model.
I asked ChatGPT for advice, and it was strongly against the idea.
But I don't see any reason why it shouldn't work, if properly implemented. Maybe I should try to do an experiment and see how it works out.
I mean, you cut off the reason in the screenshot
so idk either
It was mostly about latency, as you can infer from 1.
Here's the relevant part, if you feel interested:
HACKER
(why does he not obey the law of wal)
just kiddingggggg it's not my problemmmmmm my friend will probably fix it later
My attempt to refactor the AI agent got blocked by a bug in Language-Ext library.
I was experimenting with abstracting the NPC AI as a functional stream of processing "percepts" and emitting "observations" and "intents".
Probably I'll need to work on different parts of my game until it gets fixed.
The bug was fixed, and the library author even added new APIs for my use case. I'm good to go again. I started a big refactor of the codebase, so it'll take some time before it can be made to build again, though.
hey, i'm kimo. i've been working on a tool to help with quick 2d asset generation for jams and prototyping.
i built this mostly to speed up my own workflow, but i'd love to get some feedback from other people using ai in their dev process. you can try it for free. https://www.spritecook.ai/?v=2
I like the first samples. At least you have better taste than most of the creators of AI workflow websites for video game assets I heard of (there are some I wish I could unsee). Which model is behind it?
Hmm... I'm in the middle of refactoring my NPC AI agent from looping a one-shot prompt to a long-running agent, like how coding agents work.
The initial testing looks not very promising, but it's too early to tell if the approach is not feasible.
I made it work with a minimalistic conversation scenario. I'm still unconvinced if it'd be a viable option for more prolonged, interaction-heavy scenarios.
(For a context) To drive an NPC using an LLM:
- AI and the player alternately post a message - suitable for a turn-based dialogue system.
- Periodically run a prompt that triggers actions via tool calls or response - What I did earlier.
- Keep an AI agent running indefinitely, pulling context information via tool calls when needed - What I'm trying to do now.
The "pulling context on-demand" has been the most challenging part for me. I'm not sure if AI will be able to grasp the overall flow of the scene by threading the scattered information together.
In the previous model, I just collected all the scene events programmatically, and neatly arranged them in a 'chat history' format in the prompt.
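Roughly like this sketch, if anyone's curious (Python; the event schema is invented for illustration):

```python
def events_to_history(events, npc_name):
    """Convert raw scene events into OpenAI-style chat messages, so the
    LLM sees the scene as an ordinary conversation. The controlled NPC's
    own lines become 'assistant' turns; everything else is 'user'."""
    history = []
    for ev in events:
        if ev["kind"] == "speech":
            role = "assistant" if ev["actor"] == npc_name else "user"
            history.append({"role": role,
                            "content": f'{ev["actor"]}: {ev["text"]}'})
        else:
            # Non-verbal events become bracketed narration lines.
            history.append({"role": "user", "content": f'[{ev["text"]}]'})
    return history
```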
Nice, awesome!
Looks good
Thanks, you can select the model you want to use. I try to add them as soon as they come available. Rn it's Nano Banana Pro and Gpt-Image-1 (Sora)
.kimosabe thanked saheyus
I see why the samples look neat. I felt like the multimodal models were a huge leap forward for pixel art.
Good luck with that, I’m already having trouble setting up a proper context formatter for a non-runtime dialog writer (usual issues with too much exposition, low understanding of relationships etc, and that’s with models as big as GPT5.2)
Thanks. Yeah, I guess I'll need some luck if I'm to make it work reliably. I had some success with my previous version, but making it work like a coding agent seems to pose a whole different challenge.
Hey AI-Dev folks! 👋
I’ve been experimenting with different AI models as game masters, but I kept running into the issue of exceeding the context window. That meant NPCs would have “bleed” and things would get mixed up over time. I tried this across a few different AIs—I won’t name names, but let’s just say a few well-known ones—and it got me thinking about a more reliable way to keep the world state consistent.
So, drawing on my old database programming days (way back in my twenties!), I decided to write everything back into a relational database. This way, the AI acts as the game master but always pulls each scene’s state from the database like a file cabinet. When you enter the library, the NPCs have the correct memories and relationships for that scene. When you move to a new area, the updated memories get saved back to the database, and the next scene loads accurately.
Basically, I’m using a Python CLI (command-line interface) to grab all the info from the database and feed it back to the AI for each new area, so the AI can run the game smoothly without losing any context. Thanks for letting me share this in more detail!
I'd recommend checking out SillyTavern if you haven't already. Although it has only rudimentary extensions for the long-term memory feature, it has a pretty complex RAG system, such as 'World Info'.
Thx
deafknightjr thanked mysticfall
I managed to make the new Response Client work with OpenRouter/MS Agent Framework. I'm not sure if it's better or how the developer message works when the history grows, though.
I hit an unexpected roadblock - the indefinitely running agent model worked relatively well, but many APIs like chat store or reducer in MS Agent Framework seem to get triggered when a (non-tool) response is received.
Maybe such a model isn't how an agent system is supposed to be used. It's not a big problem, however, since the other refactoring results, like using a conversational history instead of a one-shot prompt, are still valid.
I'm thinking about instructing the agent to use responses as a way to keep its internal thread of thoughts/plans, similar to reasoning responses but for itself rather than the user.
Previously, I implemented a checklist and memo system for such a purpose as tools, but maybe a short plain-text response could work just as well.
Hey — just wanted to follow up now that I’ve had some time to experiment more.
You were right that SillyTavern wasn’t an exact fit — I treated it more as a reference point, like you suggested. I spent about 10–12 hours poking at it and it helped clarify the core issue I was running into.
What I kept hitting across different AI GM setups was context window bleed over time — NPC memories drifting, relationships getting mixed, scenes losing continuity. That’s what pushed me back toward something closer to my old database days.
What I’m building now is a Hero System–faithful setup where the AI acts as GM, but all world state lives in a relational database. The AI doesn’t “remember” the world — each scene pulls state from the DB (NPCs, relationships, perception, etc.), runs the adjudication, then writes the results back before the next scene loads.
I’m using a Python CLI to fetch only the scene-relevant data and feed it to the AI, so it can GM cleanly without accumulating drift. It’s kind of like a hard-state version of what you mentioned with RAG for world lore — but with strict persistence and rule enforcement instead of soft recall.
SillyTavern was useful to look at, but it confirmed I’m less in the RP space and more in the simulation + adjudication space. Just wanted to share where that exploration landed. Appreciate the earlier nudge — it helped narrow things down.
Yeah, ST is focused on RP, and an ideal RAG system for RP could be quite different from the one for a simulation type game, as you noted.
Still, the important question could be what information to pull, rather than from where. If the information is simple enough, you can get away with dumping everything into the context. But you'll see problems like memory loss or mixed-up relationships, like you said, if you put too much info into the context.
ST tries to mitigate this by implementing a very complex set of conditions & rules to determine what information to pull from the World Info. It doesn't matter much how you store the source - RDBMS, SQLite, VectorDB (it may matter if you're thinking about using embeddings), etc. - but how you select the essential information out of it does.
Another thing to consider could be adopting an agentic architecture. Letting the agent dynamically pull relevant information could be much more efficient than populating a prompt from a RAG store before sending it to LLM.
Also, you can try things like building an agent workflow, if latency isn't too much of an issue, where a 'guardrail' agent verifies the validity of what the GM agent responded with, for example.
But probably there's no silver bullet in such a matter, and each game may have different requirements. After all, there haven't been enough people using LLMs/AI agents in their games yet to establish best practices for such usage.
Yeah, that all makes sense — and I agree there’s probably no silver bullet here.
Where I ended up landing is that for my use case, the selection problem is more important than the storage problem, like you said. I’m less worried about whether the source is RDBMS vs vector DB and more about being explicit about what the GM is allowed to see at a given moment.
That’s why I’ve been leaning toward a very explicit “scene snapshot + recent events” model instead of broad RAG pulls. The idea is that the GM agent never has to infer relevance from a large lore store — the engine tells it exactly who’s present, what they know, what just happened, and what rules apply, and everything else is effectively out of scope for that turn.
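As a sketch of that selection step (Python; the data shapes are invented for illustration):

```python
def build_snapshot(world, location, max_events=10):
    """Select only what the GM is allowed to see this turn: who is
    present at the location, what they know, and the most recent
    events that happened there. Everything else is out of scope."""
    present = [n for n in world["npcs"] if n["location"] == location]
    return {
        "location": location,
        "npcs": [{"name": n["name"], "knows": n["knows"]} for n in present],
        "events": [e for e in world["events"]
                   if e["location"] == location][-max_events:],
    }
```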
I’ve thought about agentic approaches too (guardrail / verifier agents in particular), and I may experiment with that later, but right now I’m trying to keep the first version as deterministic as possible so I can tell whether errors are coming from model behavior or state selection. Once that’s solid, adding verification layers feels safer.
Totally agree this space is still early and that different games will want very different tradeoffs. I’m mostly trying to get something that behaves like a by-the-book tabletop GM — slow, constrained, sometimes boring — and see how far that paradigm can be pushed.
Appreciate the thoughts, though. They’ve helped me frame the problem more clearly.
Good luck, and please consider sharing what you find when you make progress. 🙂
By the way, I'm in the middle of refactoring my project which used the “scene snapshot + recent events model" into a "minimal context + agent pull model" at the moment.
The previous model worked well. So I know it can be a valid approach.
Current rudimentary UI to test and debug my Social Engine. It does not show the questions I asked, so you will have to infer them. Working on it. The scene is Location ID 7 in a table, which is Rebecca Law's back yard pool area. The images on top are the NPCs present, and I play Steve Savage.
That looks promising 🙂
By the way, I stumbled upon this the other day, while browsing the SillyTavern subreddit: https://github.com/SpicyMarinara/rpg-companion-sillytavern
Haven't tried it myself but the screenshot looked interesting.
Thanks I will check it out tomorrow evening.
deafknightjr thanked mysticfall
Now that we have a practical way of making an NPC speak naturally, and reactively without scripts, I think what I linked above could be the other half of the puzzle that would give us human-like autonomous NPCs in games.
The link is just a paper so it may need some time before we see a usable implementation, however.
[Coming Soon] We will release the full code, pre-trained models, and the Frankenstein Dataset.
On a side note, I think it might make sense to make a special kind of PC peripheral just for running AI models.
I'm already running a TTS, STT, and lipsync generator locally in my game, so my graphics card might lack VRAM to run yet another AI model.
That's the AI structure for my Romance game
I know it's hard
But there's nothing I can do
how do you make this chart?
I first made it on rough paper, then AI-generated it
ah, that makes whatever is going on after the candidate actions make sense
Yeah
Since it's R&D stuff it should be hard
I should do stuff like this
especially if im going to try and make increasingly more Game Ai-heavy games
Yeah, you should try it. If you're aiming to make more AI-heavy games anyway, experimenting like this now will help a lot. Even rough tests can teach you what works and what doesn't.
I need something like that! I had some of those parts as rough ideas in my head, but seeing it in a chart made it much clearer 🙂
My version would be much simpler and some parts wouldn't be applicable in my project. But that was enough to clear up my ideas.
On a side note, I find it interesting your version also starts from a "perception -> observation" loop, which is exactly how I implemented mine.
I thought the abstraction wouldn't be too common, since I suspected modelling sensory data would be too fine-grained for most other games.
I did some experiments about having dedicated planning and decision making stages, by the way, which didn't go well.
They worked relatively well at what they were supposed to do, but were too slow for realtime interaction.
So I consolidated everything into a single agent, and I'm in the process of testing it to see if it'd be capable of making reasonable decisions.
I was thinking about reviving the planning part as a parallel loop if needed as a compromise.
I can't do any AI stuff until an issue I reported for Language-Ext gets resolved, though.
Looks interesting, especially its voice designing and natural language instruction features.
I am working on a Social Engine and your chart inspired me to compare my flow.
tbf that chart was basically a brain architecture, then I just modified it into an AI simulator 😛
yeahh, that's basically the tradeoff I ran into too. I did some tests with split decision -> planning stages and the behaviour was cleaner, but the latency cost made it slower in real time
then what worked better for me is kinda hybrid, like
-> keep a fast reactive loop every tick (utility scoring + pick next action)
-> run a planner at lower frequency / in parallel (every N ticks or on big events)
or -> treat planner output as a proposal that can be cancelled / replanned if the world changes
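A skeleton of that hybrid could look like this (Python; every function here is a placeholder for real game logic):

```python
PLAN_INTERVAL = 30  # the slow planner runs every N ticks

def run_agent(ticks, score, plan, plan_still_valid):
    """Fast reactive loop every tick; planner output is only a
    cancellable proposal (a list of steps) that can be dropped
    the moment the world invalidates it."""
    proposal = None
    actions = []
    for tick in range(ticks):
        if tick % PLAN_INTERVAL == 0:
            proposal = plan(tick)           # slow, low-frequency planning
        if proposal and not plan_still_valid(proposal, tick):
            proposal = None                 # world changed: drop the plan
        if proposal:
            actions.append(proposal[0])     # follow the plan's next step
            proposal = proposal[1:] or None
        else:
            actions.append(score(tick))     # fast utility-scored fallback
    return actions
```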
Yeah, I think the split model is the way to go. I simply can't think of any other model that would work in this context.
Hey folks, I am currently working on a NavMesh navigation system. Currently, I am using Recast to generate and query my NavMesh. My game has strategy / base-building elements where the NavMesh needs to be updated (using the re-tile functionality in Recast to handle this at runtime). Because this needs to be somewhat realtime, I turn down the tile-to-terrain resolution (aka the sample rate of the base terrain, along with the marching cube size of the mapper). This results in a NavMesh that is great on flattish terrain, but add any hills and the NavMesh pokes through the terrain.
My solution:
- I periodically project the navigating Agent's world position back to the NavMesh. This requires a query of the NavMesh with a specified search bounds.
- I navigate the agents based on the NavMesh, where I need to query agent and target positions from world coordinate to NavMesh coordinate (projection).
The new problem:
- In general this is something like (simplified example) TerrainPos: (0, 1, 0), SearchExtents: (1, 1, 1) - but periodically, given the error between the terrain and the NavMesh, we exceed the search extents' (y) coordinate.
Literally typing this all out, I think the solution is to not project the agent positions from the NavMesh at all and treat the NavMesh as the source of truth. Only in rendering do I project the agent back to the terrain, which can be as simple as a heightmap lookup and/or a raycast against a structure's simplified collider (e.g. walking on a bridge). Welp, too late to turn back now. I might as well ask it as a question: would this be how you solve the problem (use the NavMesh position as the source of truth for agent positions and project/raycast onto terrain/detail meshes at render time)?
I should also mention that finding the nearest position on the NavMesh using Recast is kind of expensive when I do it per agent per frame (source + destination). Man, that was a lot of words, and I think I have my solution. Thanks
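For the record, the render-time projection I mean could be as simple as this sketch (Python for brevity; the names are made up, and the heightmap lookup stands in for a raycast):

```python
def render_position(nav_pos, heightmap, cell_size=1.0):
    """nav_pos is (x, y, z) simulated on the NavMesh; replace y with the
    sampled terrain height so the agent visually sticks to the ground.
    heightmap is a row-major 2D grid of heights, indexed by (z, x)."""
    x, _, z = nav_pos
    col = min(max(int(x / cell_size), 0), len(heightmap[0]) - 1)
    row = min(max(int(z / cell_size), 0), len(heightmap) - 1)
    return (x, heightmap[row][col], z)
```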
How long until someone tries vibing pathfinding by feeding the graph to an LLM?
I have no plan of doing something like that, but I'm thinking about feeding an LLM a top view of the navigable area and character positions, to let it determine the coordinates to walk to for the character it controls.
Dont these benchmarks show model to model improvement?
Where did the claim that "ai is already better than most humans" come from?
This isnt really supporting your claim
holy crap we have an ai friendly space?
sorta this room was for ai-dev
i mean if you look at the description, it's meant for using AI for generative dialog etc, not for general AI discussion or coding
there should be an agentic coding room
So my panmathos, an AI-controlled universal engine, fits here? lol
yeah
look at where humans typically rank when trying to do the same tests
do you trust a review from a pro Gemini model, or pro ChatGPT 5.3, inspecting code and determining functionality without context? say, would you believe a review of a project folder to be factual (function-structure-capability)?
have you tried claude code yet?
i ask because you seem to be all-in on codex 
i tried it a little but couldn't really do much w/o paying for the full plan, and i already got openai
i didn't see it to be much better..
I think Codex is much better at staying precisely aligned with my style
I give it detailed plan docs and it loves it.
i personally use deepseek whenever im doing sanity checking because it's insanely cheap
I want panmathos on that list some day..
What's funny is when I started panmathos (it was dataforge back then), a lot of what I "envisioned" I was told was impossible lol.. and I did it anyway.
I'm glad to see active discussions on this channel. I wish I could talk more on stuff I do with AI than on AI itself (e.g. ethics), but probably it'll take a month or so before I can revisit the AI part of my game project.
On a side note, I've been rather slow to adopt code agents, although I've long been using an AI assistant that shipped with the IDE.
In a month or so, I'll start a pilot project at work which I'll use to evaluate the best agent option for the team and establish related conventions.
For now, I'm leaning towards IntelliJ ACP + OpenCode + OpenRouter, for maximum flexibility & integration.
My focus will be creating a setup (e.g. agent policies / instructions) that'd allow the team to reliably run agents to implement new features and fix bugs.
Panmathos will allow me to have "smart npcs", ones that can remember and dynamically change over time. My mmo is designed to be an evolving world with environmental pushback. I needed a complex list of capabilities, way more than I alone can easily create and work with, so panmathos IS my team lol, my own solution to generative AI's flaws. My goal is that I design the aesthetics, anatomy, environment, and world atmosphere just once, and the rest of the world is generated by my AI in my image. Although in this process I have discovered a few things, and found additional "good uses" panmathos could serve for me, now it's something special lol.
@ashen plaza Do you have a Github repo for Panmathos? It sounds interesting. 🙂
No, I have not made it public in any way; it owns a hard drive on my pc and I have 30+ previous builds sitting on my primary hard drive lol. There have been a lot of walls and new ideas along the way. I like to solve problems I encounter, not skirt around them.
Leyoki will be, that's the game side of things, and if panmathos really is everything I hope, I'll package and build modules for other things: book writing, record keeping, etc.
So far I've been met with mostly negative reactions about my project, so I never ended up working with anyone, hence another reason to build my own team lol.
That sounds like an ambitious project. Don't burn yourself out, ship it step by step.
when i feel the burn, i switch to a smaller game or smaller project for a bit, usually that project involves a small element of my bigger design, so i can take a moment to test a smaller system.