The problem stemmed from the fact that the system was trained on applications submitted to the company over a 10-year period, most of which came from men.
The AI was tweaked in an attempt to fix the bias, but last year Amazon lost faith in the system's ability to be neutral and abandoned the project.
Amazon recruiters are believed to have looked at the system's recommendations when hiring, but not to have relied on its rankings.
Currently, women make up 40 per cent of Amazon's workforce.
Stevie Buckley, the co-founder of UK job website Honest Work, which is used by companies such as Snapchat to recruit for technology roles, said: "The basic premise of expecting a machine to identify strong job applicants based on historic hiring practices at your company is a sure-fire method to rapidly scale inherent bias and discriminatory recruitment practices."
Inherent bias in algorithms is a common problem in the technology industry. Algorithms are not explicitly programmed to be biased, but can become unfair through the data on which they are trained.
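As a minimal illustration of how this happens, the sketch below (using entirely synthetic, invented data) trains a simple model on historic hiring decisions that favoured men; the model ends up with a large positive weight on gender, even though nothing in the code asks for bias:

```python
# Hypothetical sketch: a model trained on skewed historic hiring
# decisions reproduces the skew without ever being "told" to.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
gender = rng.integers(0, 2, n)          # 0 = female, 1 = male (synthetic)
skill = rng.normal(0.0, 1.0, n)         # the genuinely job-relevant signal
# Historic decisions favoured men regardless of skill:
hired = (skill + 1.5 * gender + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)
# The learned weight on the gender column is large and positive:
# the model has absorbed the bias directly from the data.
print(dict(zip(["gender", "skill"], model.coef_[0])))
```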
Jessica Rose, a technical manager at education start-up FutureLearn and technology speaker, said: "The value of AI as it's used in recruitment is limited by human bias. Developers and AI specialists carry the same biases as talent professionals, but we're often not asked to interrogate or test for these during the development process."
Last month, IBM launched a tool designed to detect bias in AI. The AI Fairness 360 toolkit allows developers to see how their algorithms reach decisions and which pieces of data they use.
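The sketch below shows the kind of check the toolkit enables, using IBM's open-source aif360 Python package on a small dataset whose column names and values are invented for illustration:

```python
# Hypothetical sketch using IBM's open-source aif360 package:
# measure whether a dataset's favourable labels are skewed by gender.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Invented toy data: 'sex' is the protected attribute, 'hired' the label.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)
# Ratio of favourable-outcome rates (unprivileged / privileged);
# values well below 1 flag potential disparate impact.
print(metric.disparate_impact())               # 0.25 / 0.75 ~= 0.33
print(metric.statistical_parity_difference())  # 0.25 - 0.75 = -0.5
```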
"Considering Amazon's exhaustive resources and talented team of engineers," Mr Buckley said, "the fact their AI recruiting tool failed miserably suggests we should maintain a default scepticism towards any organisation that claims to have produced an effective AI tool for recruitment."
Amazon declined to comment.