"After a second or two he notices ... terminates the removal, but it's too late. Of around 300GB only about 4.5GB is left," the company wrote in a blog.
Following the incident, GitLab took the site down for emergency maintenance and kept its customers informed on social media, earning the company praise for its transparency.
Although the error was caught quickly, the start-up was unable to restore all of the data.
"Out of five backup/replication techniques deployed, none are working reliably or set up in the first place. We ended up restoring a six hours old backup," the company wrote.
Thankfully, the database in question contained only comments and bug reports, meaning no one's code was lost in the missing six-hour window.