I would not be so sanguine about machine sentience. Right now software is structurally unsuited for it: a program waits for some event, be it a user operation or a signal of some kind, runs whatever handler that event triggers, executes in a deterministic fashion (or, maddeningly, unpredictably), and finishes.
If you have not heard of John Horton Conway and the Life game, you should look it up. Even the most entry-level programmer can implement it (I wrote a version on a TRS-80 with Cassette-BASIC), and it does seriously Wonderful Things. It's a grid with a clock, and on each tick every cell in the grid turns on or off depending on how many of its eight neighbors are on: a live cell with two or three live neighbors stays on, a dead cell with exactly three live neighbors turns on, and everything else is off.
It's an example of a cellular automaton.
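The whole game really is that small. Here's a minimal sketch in Python rather than Cassette-BASIC; the set-of-live-cells representation, the wrapping grid, and the glider pattern are my illustrative choices, not anything from Conway's original formulation:

```python
from collections import Counter

def step(live, width, height):
    """Advance one tick: live is the set of (x, y) cells that are on."""
    # Count how many live neighbors every nearby cell has,
    # wrapping around the edges of the grid.
    counts = Counter(
        ((x + dx) % width, (y + dy) % height)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is on next tick if it has exactly 3 live neighbors,
    # or 2 live neighbors and it is already on.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five live cells that crawl diagonally across the grid,
# a famous example of the complex behavior that emerges from the rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state, 8, 8)
# After 4 ticks the glider reappears shifted one cell down and right.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in those few lines mentions gliders, oscillators, or the self-reproducing patterns people have built in Life; they all fall out of the two-line update rule.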
The point is that complex behaviors emerge from a supremely simple set of rules. Consciousness is a complexity that emerges from the firing states of neurons, which have rules more complex than a flat grid but ... well, you get the picture.
Machines do learn. Chess-playing machines can, yes, calculate vast permutation sets, but they also remember mistakes by modifying their own memory; if we insist on looking at everything as bits and data it's easy to say a machine can never exhibit Complexity ....
.... but then, we don't experience synapses, either.
Be. Very. Afraid.
It will probably begin with simulated consciousness, which means machines that are never idle, always running, their processes distributed across vast numbers of ... cells.