Facebook spokesman Joe Osborne issued the following statement: "This targeting option has been removed, and we've taken down these ads. It's against our advertising principles and never should have been in our system to begin with. We deeply apologise for this error."
Mr Osborne added that the "white genocide conspiracy theory" category had been "generated through a mix of automated and human reviews, but any newly added interests are ultimately approved by people.
"We are ultimately responsible for the segments we make available in our systems."
He also confirmed that marketers had used the ad category, but said the ads bought against it were "reasonable" ones, such as news coverage.
However, The Intercept says that a basic search of Facebook groups still turns up "tens of thousands" of users interested in "white genocide" and "hate-based content", who find each other through the platform.
In an ironic twist, shortly after apologising for the "white genocide" ads, Facebook had to apologise for blocking an anti-abortion ad endorsing Marsha Blackburn, a Republican candidate for the US Senate, after the platform was accused of bias and of censoring free political speech.
It comes as Facebook and other social media platforms have been fighting online misinformation and hate speech for two years.
With the US midterm elections just a few days away, there are signs that they're making some headway, although they're still a very long way from winning the war against misinformation.
Some even argue that the social networks are easy to flood with disinformation by design — an unintended consequence of their eagerness to cater to advertisers by categorising the interests of their users.
Caught embarrassingly off-guard after being played by Russian agents meddling in the 2016 US elections, the technology giants have thrown millions of dollars, tens of thousands of people and what they say are their best technical efforts into fighting the fake news, propaganda and hate speech that have proliferated on their digital platforms.
Facebook, in particular, has made a dramatic about-face since late 2016, when CEO Mark Zuckerberg infamously dismissed as "pretty crazy" the idea that fake news on his service could have swayed the election.
But fake news remains widespread and may be spreading to new audiences. A team led by Philip Howard, the lead researcher on Oxford's Computational Propaganda effort, looked at stories shared on Twitter during the last 10 days of September 2018 and found that what it called "junk news" accounted for a full quarter of all links shared in that period, a larger share than professional news stories.
Stamping out misinformation, of course, is anything but easy. Adversaries are always finding new ways around restrictions.
It can also be hard to distinguish misinformation and propaganda from legitimate news, especially when world leaders such as President Donald Trump are regularly disseminating falsehoods on social media.
Some critics charge that the very advertising-based business model that made Zuckerberg rich is also perfectly suited for propagandists.
Services like Facebook and Twitter "sustain themselves by finding like-minded groups and selling information about their behaviour," Dipayan Ghosh, a former privacy policy expert at Facebook, and Ben Scott, a senior adviser at New America, wrote in a Time Magazine op-ed earlier this year.
"Disinformation propagators sustain themselves by manipulating the behaviour of like-minded groups."