I know less about other varieties of commercial AI.
The training process for all machine-learning-style systems is basically the same: feed large piles of data through a series of mathematical filters, organized around the conceit of "neurons" (though really this is not much different from any other chain of functions in other conceptual architectures), and nudge the numbers after each pass to reduce the error.
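To make the "chain of functions" point concrete, here is a minimal sketch of a forward pass: each "layer" is just a weighted sum plus a nonlinearity, composed with the next. The shapes and weights are made up for illustration; real systems do the same thing at enormous scale.

```python
import random

def make_layer(n_in, n_out):
    # Random weights and biases: the "big pile of numbers" that
    # training would gradually adjust to reduce error.
    w = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    b = [random.uniform(-1, 1) for _ in range(n_out)]
    return w, b

def apply_layer(layer, x):
    w, b = layer
    # Each "neuron" is a weighted sum followed by a nonlinearity
    # (ReLU here) -- an ordinary function, nothing brain-like about it.
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

# Two layers chained together: 4 inputs -> 8 hidden values -> 3 outputs.
layers = [make_layer(4, 8), make_layer(8, 3)]
out = [0.5, -0.2, 0.1, 0.9]
for layer in layers:
    out = apply_layer(layer, out)
```

Structurally this is indistinguishable from any other pipeline of composed functions; the "neural" framing is mostly branding.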
As for how the algorithms that apply the models (read: a big pile of numbers) work, I know less, not having messed with them or read up on them much. I know a little about the ones used in image generation, but less for other applications like computer vision or medical diagnosis.
There are of course tons of other AI architectures; most video games use rules-based systems, for example. And there are tons of generative algorithms that don't rely on machine learning models at all, just other neat math tricks, often involving an assortment of random numbers either generated on the fly or arrived at through trial and error. The famous Perlin noise and simplex noise algorithms used for terrain generation in games and other image and simulation tasks are basically math applied to a pile of magic numbers that produce the results they do for somewhat mysterious reasons.
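A simplified cousin of those algorithms shows the flavor: hash each integer coordinate through a fixed table of magic numbers, smoothly interpolate between neighbors, and stack a few frequencies on top of each other. This is a toy 1D value-noise sketch, not real Perlin gradient noise; the tiny `PERM` table here is a stand-in for the full 256-entry permutation the classic implementation uses.

```python
import math

# A stand-in "magic number" table; classic Perlin noise uses a fixed
# permutation of 0..255 that looks just as arbitrary as this.
PERM = [151, 160, 137, 91, 90, 15, 131, 13, 201, 95, 96, 53, 194, 233, 7, 225]

def hash_val(i):
    # Deterministic pseudo-random value in [0, 1) for integer coordinate i.
    return PERM[i % len(PERM)] / 255.0

def smoothstep(t):
    # The classic fade curve: eases the interpolation so the result
    # has no visible creases at cell boundaries.
    return t * t * (3 - 2 * t)

def noise1d(x):
    i = math.floor(x)
    t = smoothstep(x - i)
    # Blend between the hashed values at the two nearest integers.
    return hash_val(i) * (1 - t) + hash_val(i + 1) * t

def terrain(x, octaves=4):
    # Sum octaves at doubling frequency and halving amplitude;
    # this layering is what makes the output look like mountain profiles.
    return sum(noise1d(x * 2 ** o) / 2 ** o for o in range(octaves))

heights = [terrain(x / 10) for x in range(50)]
```

Plot `heights` and you get a plausible mountain ridge, despite the whole thing being a lookup table plus interpolation.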
Anyway, the key point is that there's no reason to believe any of this has much to do with thinking in any real sense, any more than we should assume the universe runs on Perlin noise just because we can make shapes with it that look remarkably like mountains and valleys.