On OpenAI Shutting Down Projects

OpenAI recently shut down one of their clients’ chatbot projects, and the decision was discussed on Hacker News with plenty of arguments on both sides. My opinion is that we should default to siding with OpenAI on this one.

Technology is an accelerating process. As we get better at research and invention, our methods for producing research and invention themselves get better.

The last time technology developed this explosively and wantonly, we ended up with nuclear weapons that threatened the very existence of humanity. With AI, we’re at a point where we can actually anticipate potential disaster ahead of time. Call it science fiction if you will, but the truth is that science fiction is what technology aspires to be.

I was an ML engineer, and I don’t think we’re at the point where danger is imminent yet, but perhaps it’s best we don’t stray near that area at all. Remember that in some systems we want to minimize false positives, but in this case OpenAI wants to minimize false negatives: we would rather not let potentially dangerous AI usage slip by.
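
To make the asymmetry concrete, here is a minimal sketch, with entirely hypothetical project names, risk scores, and threshold, of why a detector tuned to minimize false negatives will inevitably flag some benign projects, which is exactly why a correction process matters.

```python
# Minimal sketch (hypothetical numbers): a policy check that must not miss
# dangerous usage is tuned to accept false positives rather than false negatives.

def flag_for_review(risk_score: float, threshold: float) -> bool:
    """Flag a project for human review if its estimated risk exceeds the threshold."""
    return risk_score >= threshold

# Hypothetical risk scores for a handful of projects (1.0 = clearly dangerous).
projects = {
    "benign-support-bot": 0.10,
    "edgy-roleplay-bot": 0.45,
    "weapons-advice-bot": 0.80,
}

# A cautious (low) threshold flags more benign projects (false positives),
# but makes it far less likely that a dangerous one slips through (false negatives).
cautious_threshold = 0.40
for name, score in projects.items():
    if flag_for_review(score, cautious_threshold):
        print(f"{name}: flagged for review")  # an appeal process corrects the mistakes
    else:
        print(f"{name}: allowed")
```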

Thus, my default stance is that it’s understandable and okay for OpenAI to liberally pursue potential breaches of their AI usage policies, so long as there is also some method for correcting these false positives going forward.
