Algorithms aren't left-biased, but it can look that way for social reasons.
Firstly, the left is much keener on controlling speech than the right (at least these days). So takedowns are often going to be driven by a leftist agenda simply because the left is the side demanding takedowns.
Secondly, and relatedly, the AI-driven takedowns are trained on human decisions, primarily, I'd guess, user flags. If one side of the political debate systematically responds to anything it disagrees with by flagging it and the other side doesn't, the AI will inevitably learn to associate user flags with right-wing content. Its job is to predict what content will be flagged, so it turns into a conservative detector. But that's not the fault of the AI. Similar dynamics can be seen on many online forums, where any post with a conservative theme gets downvoted like crazy even if it's polite and well written.
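To make that feedback loop concrete, here's a minimal sketch (Python with scikit-learn; the features, flag rates, and names are invented for illustration, not anything a real platform uses) of how a model trained purely to predict "will this get flagged?" ends up weighting political lean:

```python
# Illustrative sketch only: a moderation model trained to predict user flags
# on data where one side flags the other's posts far more often. All feature
# names and rates below are made up for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: is the post conservative-leaning, and does it
# contain a genuinely abusive signal (independent of politics)?
conservative = rng.integers(0, 2, n)
abusive = rng.random(n) < 0.05

# Assumed flagging behaviour: abusive posts get flagged by everyone, but
# conservative posts also get flagged ~30% of the time just for the politics.
flagged = abusive | ((conservative == 1) & (rng.random(n) < 0.30))

X = np.column_stack([conservative, abusive]).astype(float)
model = LogisticRegression().fit(X, flagged)

# The model dutifully predicts flags, so the "conservative" feature ends up
# with a large positive weight even though it says nothing about abuse:
# the flag-predictor has quietly become a politics detector.
print(dict(zip(["conservative", "abusive"], model.coef_[0].round(2))))
```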
In turn, that's not entirely the fault of the users. It's mostly our fault (programmers and product designers): we give users very vague tools with which to express views on content. Is the like/dislike button meant to express a values-neutral judgement of quality, or just to track who agrees and who disagrees? On Reddit, are you meant to upvote posts that are good even if you disagree with them, or are you meant to downvote wrongthink? The platform doesn't say! So the decision defaults to your underlying political values, which for the libertarian/conservative right are often "I may disagree, but I'll defend your right to say it" and on the left are "bad ideas spread, so I'll vote them down to reduce their visibility".