What is Crawling?
Google’s crawlers (software such as Googlebot) visit web pages to discover information. They crawl each page, extract the links on it, and follow those links much like a human reader would, sending the data they gather about each page back to Google’s servers.
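To make the link-following idea concrete, here is a toy sketch in Python. It is not how Googlebot actually works, just a minimal illustration of "visit a page, collect its links, follow them"; the start URL is a placeholder.

# Toy illustration of the link-following idea behind crawling.
# NOT how Googlebot works internally; a minimal standard-library sketch.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    """Visit pages breadth-first, following links, like a crawler does."""
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # a real crawler would record a dead link here
        parser = LinkExtractor()
        parser.feed(html)
        # Turn relative links into absolute ones and queue them.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

# Example: crawl("https://www.example.com/")  <- placeholder URL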
Crawling begins with a list of URLs from past crawls and with the sitemaps that webmasters or website owners have submitted. That is why we need to submit sitemaps.
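For illustration, here is a minimal sitemap in the standard sitemaps.org XML format; the URL and date below are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2017-01-01</lastmod>
  </url>
</urlset>

Each <url> entry lists a page you want crawled, and <lastmod> tells crawlers when it last changed.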
Google’s crawlers pay particular attention to new sites, new web pages, changes to existing sites, the links they follow, and dead links.
However, we can stop crawlers from accessing certain pages and content by editing the site’s robots.txt file.
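For example, a robots.txt file placed at the root of a site (e.g. example.com/robots.txt; the /private/ path here is just a placeholder) might look like this:

User-agent: *
Disallow: /private/

Sitemap: https://www.example.com/sitemap.xml

The first two lines tell every crawler to stay out of the /private/ directory, and the Sitemap line points crawlers to the sitemap mentioned above.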
What is Indexing?
Indexing is what happens after crawling: Google processes the pages its crawlers bring back and stores them in its index, the huge database it searches when answering queries. Only indexed pages can appear in search results.
This is the basic knowledge you should have when you start learning SEO or digital marketing; it gives you an idea of how to think when learning about, or adding, meta tags on a website.
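As a small example of such a meta tag, the standard robots meta tag below, placed in a page’s <head>, asks search engines not to index that page while still following its links:

<meta name="robots" content="noindex, follow">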
I will share more advanced knowledge about Google crawling, indexing, and the robots.txt file in one of my upcoming blog posts.
My next blog post will be “How to Learn SEO in 2017?”
It will guide you through best practices for learning white-hat SEO in 2017.