The new gTLDs now being implemented raise several security concerns. One of the most significant is name collision, which occurs when a single domain name is used in two different namespaces.
An example of this would be a company that uses .corp in an internal domain name. Under the new gTLD processes, the .corp gTLD could be registered by a different company for use on the public Internet. If that happens, when a user tries to reach internal locations on a company network using .corp, there is a chance they will actually get data back from the now-legitimate .corp servers on the Internet.
Using an internal domain name like this is a very common practice among businesses, so any issues with .corp could be widespread. The owners of these new gTLD servers could also manipulate their records to redirect wayward queries, opening the door to malware or phishing attacks on unsuspecting systems.
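A minimal sketch of how such a collision plays out. The zone data, hostnames, and addresses below are all invented for illustration; the model is a client that falls back to public DNS when its internal resolver has no answer, a common misconfiguration:

```python
# Hypothetical sketch of a name collision. The internal name "mail.corp"
# and all addresses are made up for illustration.

INTERNAL_ZONE = {"mail.corp": "10.0.0.25"}  # the company's private DNS view
PUBLIC_ZONE = {}                            # public DNS: .corp not yet delegated

def resolve(name, internal_up=True):
    """Resolve like a client that leaks unanswered queries to public DNS."""
    if internal_up and name in INTERNAL_ZONE:
        return INTERNAL_ZONE[name]
    return PUBLIC_ZONE.get(name)  # the query escapes to the public namespace

# Today: the internal name resolves privately, and any leaked query finds nothing.
assert resolve("mail.corp") == "10.0.0.25"
assert resolve("mail.corp", internal_up=False) is None

# After .corp is delegated, its registrant can answer the leaked query --
# the client silently receives an address it does not control.
PUBLIC_ZONE["mail.corp"] = "203.0.113.7"
assert resolve("mail.corp", internal_up=False) == "203.0.113.7"
```

The point of the sketch is that nothing on the client changes: the same lookup that used to fail harmlessly now returns a live, externally controlled answer.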
However, it’s unlikely companies in this new wave of gTLD registrations will do such a thing. This is an unprecedented change to the Domain Name System, and it will be monitored under a microscope. Still, as the pool of gTLD servers grows, a hacker has more servers to attempt to compromise. The root servers and current TLD servers have been very secure and reliable so far, but the new registrants of these gTLDs may have flaws or poor security practices, making it easier for someone to gain access and cause problems.
This isn’t just an issue for anyone using .corp, either. It could affect anyone using internal networks. Why? An internal name you’re using now could one day be registered as a gTLD and cause name collisions for you. Interisle Consulting Group performed a 48-hour test on the root servers to monitor inbound traffic. Of the traffic checked, 3% included TLDs that were not yet registered but soon will be (.corp, .home, .site, .global, etc.). And a whopping 19% of traffic was for syntactically valid gTLDs that are unregistered today but could be registered in the future.
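The Interisle-style bucketing can be sketched as a small classifier. The TLD sets and the traffic sample here are invented for illustration; a real study would use the actual root zone and application list:

```python
# Hypothetical sketch: bucket each queried TLD the way the Interisle study
# did -- delegated, applied-for, or merely syntactically valid. The sets
# and the traffic sample below are invented for illustration.
import re

DELEGATED = {"com", "net", "org"}                    # already in the root zone
APPLIED_FOR = {"corp", "home", "site", "global"}     # new-gTLD applications
VALID_LABEL = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$")

def classify(tld):
    tld = tld.lower()
    if tld in DELEGATED:
        return "delegated"
    if tld in APPLIED_FOR:
        return "applied-for"
    if VALID_LABEL.match(tld):
        return "could-be-registered"   # the 19% bucket
    return "invalid"

sample = ["com", "corp", "home", "lan", "net", "local", "_bad"]
counts = {}
for tld in sample:
    counts[classify(tld)] = counts.get(classify(tld), 0) + 1
# counts -> {'delegated': 2, 'applied-for': 2, 'could-be-registered': 2, 'invalid': 1}
```

The "could-be-registered" bucket is the worrying one: those queries fail harmlessly today, but any string in it could be delegated tomorrow.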
Many vendor defaults have systems set to use these (currently unregistered) gTLDs, which is why so much of this traffic hits the root servers. You may not be typing “website.home” into your browser, but that doesn’t mean some piece of software or hardware isn’t trying it in the background. So any gTLD that gets registered could unknowingly cause name collisions for certain software and hardware vendors.
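One practical defense is to audit the names your devices and applications are configured to use before the new gTLDs go live. A minimal sketch, with an invented watchlist and invented device names:

```python
# Hypothetical audit sketch: flag configured hostnames whose TLD appears
# on a collision watchlist. The watchlist and names are invented examples.

WATCHLIST = {"corp", "home", "site", "global"}

def flag_collision_risks(configured_names):
    """Return the names whose top-level label is on the watchlist."""
    risky = []
    for name in configured_names:
        tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
        if tld in WATCHLIST:
            risky.append(name)
    return risky

defaults = ["nas.home", "printer.corp", "updates.example.com"]
assert flag_collision_risks(defaults) == ["nas.home", "printer.corp"]
```

Names flagged this way can be migrated to a registered domain you control (or a reserved name) before a collision becomes possible.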
ICANN is working on mitigation techniques to try to avoid problems like this. Right now, some of the gTLD requests (like .corp and .home) have a hold placed on them until more investigation can be done. There is a good chance some of the more common ones will not be allowed if they have the potential to cause real problems for the Internet.
Another concern is the root server network. There are currently 13 logical root servers in the Domain Name System, with 377 physical server sites using anycast as of this writing. With the new gTLDs being implemented (up to 1000 a year), this will slowly increase the load on the root servers.
Map of Root Servers
By some projections, the increased traffic will be negligible and not a problem to manage. The main concern with the root servers is the provisioning involved. The current operation and maintenance of the root servers is a very solid system; changes to the root zone currently happen at a rate of only about one per year for each gTLD.
The provisioning and modification of the new gTLDs will greatly increase the workload and maintenance of the 13 root servers. As with many things, the more something is changed, the more likely it is to break. Fortunately, the Domain Name System is built with redundancy, and if there are failures, those redundant systems should be able to absorb them. Incorrect data, though, has the potential to cause large-scale issues. We can only hope that close attention to detail keeps such problems from materializing.
Many oppose the new gTLD rollout as well, but one of the more prominent voices against it is Verisign. Verisign is widely known for its certificate services, but its core business is running the .com and .net gTLD servers (and a few others). Verisign is concerned about the name collision issues as well as implementation problems. The company believes these new gTLDs may cause a bigger problem than the experts think, potentially affecting many companies and individuals on the Internet. That’s why it recommends more investigation and testing before the changes go live. ICANN has set a limit of up to 1,000 new gTLDs per year, which it believes is a slow enough rate not to overburden anyone during the provisioning process for each new gTLD.
The first gTLDs are expected to hit the Internet around November this year as part of the phased rollout. For the most part, it will be sort of a cosmetic change in DNS and we don’t expect problems. Experience teaches, however, that technology doesn’t always conform to what we expect.