The future of the internet is not easy to predict. It is easy enough, however, to look at where it has been, see where it is now, and draw a line that projects somewhere into the future.
So where did it begin? What was the beginning like? It was static HTML pages, and very few of them. At some point someone thought it would be a good idea to make a directory (like a phone book) of all the sites. The search engine was born from that concept. That's the beginning.
Now where are we? Social media, Web 2.0, interactive web-based applications, and millions of people online daily.
So we can deduce that the web was static and is now interactive. My guess is that it will continue to become more interactive, and more and more people will use it. Any specific predictions would just be guesses, but I'll have a go at them anyway. I think the web will take over the operating system; I think the web will be the operating system. I think Google and Apple will replace Microsoft. I think Facebook and Twitter will continue to rule the roost when it comes to social interaction, while Google will continue to answer our questions. That's what I think.
What do you think? We would love to hear your thoughts on this topic.
This may be somewhat controversial, but I believe there is a bit of truth in this idea. I've had countless sites that ranked for next to nothing until I purchased AdWords ads.
The rankings are not gifts from Google for buying ads through them, but rather the result of manually inspected site relevancy.
If you purchase ads for "blue widgets," your ad will be "pending review." After the ad or keyword is manually reviewed, a quality score is given. In my opinion, this score is then somehow used in the search algorithm.
Google would be dumb not to use this data; after they have spent time reviewing a website, why not use it? A manually reviewed site must be a more accurate measure of a site's "keyword relevancy" than backlinks and keyword density, right?
If you own many websites, and those websites link to each other in an attempt to boost search engine rankings, then being on a shared hosting server may actually benefit your cause.
If you have 3 domains on a dedicated server and they link to each other, they all share the same IP address; while this is not bad, it is also not good.
If you have 2 domains on a shared server and 1 on a dedicated server, and they all link to each other, then there is some benefit going on here.
A shared server with a few hundred or a few thousand websites could have many sites linking to each other: some of those may be owners linking to their own sites, and some may be people linking to unrelated sites that just happen to be on the same IP address. So it is difficult for Google, Yahoo, or Bing to accurately penalize or dampen the links of those owners. This assumes you believe that search engines penalize sites for linking to their other sites. (I don't think they do, at least not anymore.)
We've had a few issues with load times in the past, and in response we've changed a few things. We removed the Twitter module that searched for site data for the domain in question and displayed it on the page; we've since made a few changes to it, and it's back up. The StumbleUpon networking check has been removed.
2 Cool New Features
IP 2 Location for IP – we've now integrated the Google Maps API, as well as an IP-to-location database, to provide you with more details about a domain's hosting server. (example) Details include city, state, country flag, latitude and longitude, and location on a map.
Display other domains on IP – this cool new feature allows you to see all known domains on an IP address. Great for figuring out a competitor's portfolio. (example)
If you see any bugs, please tell us about them. 🙂
A proxy site is a site that allows users to view pages that would otherwise be blocked by firewalls, and/or to hide their IP address and internet usage.
How does it work?
Well, there are many versions, but the one I'm most familiar with is the PHP proxy. A user requests a page or base URL. The PHP on the server uses something like fopen or cURL to open the page (file) and get its contents. The content is then displayed on the proxy's own domain as a page for the user to view. So to the network admin, or anyone checking the computer's internet access log, it just looks like the user visited the proxy site.
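The flow above can be sketched in a few lines. This is a minimal illustration, not a production proxy, and it's written in Python rather than PHP for brevity: `urlopen` plays the role of fopen/cURL (the proxy server fetches the page, so the client's logs only ever show the proxy's domain), and `rewrite_links` shows the other half of the trick, rewriting links so follow-up clicks also route through the proxy. `PROXY_BASE` and `example-proxy.test` are hypothetical names for this sketch.

```python
import re
from urllib.parse import urljoin, quote
from urllib.request import urlopen

# Hypothetical proxy endpoint; a real PHP proxy would use its own URL here.
PROXY_BASE = "https://example-proxy.test/?url="

def fetch(url):
    """Server-side fetch, analogous to PHP's fopen()/curl: the proxy
    server requests the page on the user's behalf."""
    with urlopen(url) as resp:
        return resp.read().decode(resp.headers.get_content_charset() or "utf-8")

def rewrite_links(html, base_url):
    """Rewrite href targets so clicks inside the proxied page also go
    through the proxy instead of hitting the blocked site directly."""
    def repl(match):
        absolute = urljoin(base_url, match.group(2))  # resolve relative links
        return match.group(1) + PROXY_BASE + quote(absolute, safe="") + match.group(3)
    return re.sub(r'(href=")([^"]+)(")', repl, html)
```

A real proxy also has to handle images, forms, cookies, and CSS/JS URLs the same way, which is where most of the complexity in full PHP proxy scripts lives.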