If you spend any time in security circles, there is a debate you have probably heard over and over again: how to disclose a security problem found in a popular piece of software. Even when the software is not popular, how you disclose the vulnerability can still be a contentious question.
When you are dealing with software, there is always a good chance you are going to find something wrong with it. No matter how well it has been tested, something will slip through. Most of the time the problem is not harmful; it is just a bug that keeps the software from working correctly. But there is a thin line between a glitch and a security problem, and in fact that is how most security problems in software are found. Researchers, whether white hat or black hat, comb through a program with a fuzzer or similar tool and wait until they find a glitch. Once they have found that glitch, they dig in to see whether it can be exploited. There are usually several telltale signs that a glitch is exploitable, such as a crash triggered by attacker-controlled input. Once one of those signs appears, the glitch is considered a vulnerability.
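The glitch-hunting loop described above can be sketched as a toy fuzzer. Everything here is illustrative: `parse_record` is a hypothetical target with a deliberately planted divide-by-zero bug, not a real parser, and a real fuzzing campaign would point a tool such as AFL or libFuzzer at an actual program rather than hand-roll the loop.

```python
import random

def parse_record(data: bytes) -> int:
    # Hypothetical target with a planted bug: when the first two
    # bytes are equal, the subtraction below is zero and the
    # division raises ZeroDivisionError -- our stand-in "glitch".
    if len(data) >= 2:
        return data[0] // (data[0] - data[1])
    return 0

def fuzz(target, runs=10_000, seed=1):
    # Throw short random byte strings at the target and record every
    # input that makes it raise. Each crashing input is a glitch that
    # a researcher would then triage for exploitability.
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 6)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs found")
```

A crash found this way is only a glitch; deciding whether it is a vulnerability means asking whether an attacker controls the input that triggers it and what the crash lets them do.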
At this point, the vulnerability is usually reported to the proper company as a problem. But some security researchers go the extra mile: they want to see whether it can truly be exploited, and whether the exploit is dangerous. So they will write an exploit for the vulnerability, which can take days of grueling work, sometimes even months. Once they do, they can prove the security problem is serious and needs to be handled right away.
As we mentioned earlier in the article, there are several ways you can go about disclosure. There is a big controversy over which way is the right way, but whichever way you go, just make sure the public is safe in the long run.
Some people will tell the company and give it time to fix the problem before they release the information to the public. This sounds like the right thing to do, and most of the time it is. But there are downsides to this approach. It gives the company less incentive to fix the software quickly: once it knows you are not releasing the information until the fix is finished, you are on its timetable. It also means someone else has that much more time to find the same hole and build an exploit of their own. That is the situation that people on the other side of the debate fear.
Those people do the opposite. As soon as they find a vulnerability, they release the information to the public. This forces the company to fix the problem right away instead of sitting on it, and you often get faster results this way. The downside is that you hand the bad guys the information as well, and anyone who does not get the update in time is in trouble.
So as you can see, there are upsides and downsides to both approaches. You are going to have to go with your instincts on which way you think is right.