3. Site availability
Since Google refers users to your site to read the papers, your webpages must be available to both users and crawlers at all times. The search robots will visit your webpages periodically to pick up updates and to confirm that your URLs are still available. If the search robots are unable to fetch your webpages, e.g., due to server errors, misconfiguration, or an overly slow response from your site, then some or all of your articles could drop out of Google and Google Scholar.
- Use HTTP 5xx codes to indicate temporary errors that may be retried soon, such as a short-term shortage of backend capacity.
- Use HTTP 4xx codes to indicate permanent errors that should not be retried for a while, such as file not found.
- If you need to move your papers to new URLs, configure HTTP 301 redirects from the old location of each article to its new location. Do not redirect article URLs to your homepage – users need to see at least the abstract when they click on your URL in Google results.
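The status-code rules above can be sketched as a small request handler. This is only an illustrative sketch: the paths, the `MOVED` table, and the `backend_up` flag are hypothetical, not part of any real server.

```python
# Hypothetical sketch of mapping article requests to the HTTP status
# codes described above. All paths and names here are made up.
MOVED = {"/old/paper123.pdf": "/articles/paper123.pdf"}  # permanently moved URLs

def respond(path, article_exists, backend_up=True):
    """Return (status_code, headers) for a user or crawler request."""
    if not backend_up:
        # Temporary problem: 503 tells crawlers to retry shortly.
        return 503, {"Retry-After": "120"}
    if path in MOVED:
        # Permanent move: 301 points at the article's new URL,
        # never at the homepage.
        return 301, {"Location": MOVED[path]}
    if not article_exists:
        # Permanent error: 404 tells crawlers not to retry for a while.
        return 404, {}
    return 200, {}
```

The key design point is the split: 5xx responses invite a quick retry, 4xx responses do not, and 301 responses carry the new location so crawlers can update their index in place.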
4. Robots exclusion protocol
If your website uses a robots.txt file, e.g., www.example.com/robots.txt, then it must not block Google's search robots from accessing your papers or your browse URLs.
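You can check whether a given set of robots.txt rules would block access to your article URLs with Python's standard-library parser. The rules and URLs below are hypothetical examples, not real site policy.

```python
# Check hypothetical robots.txt rules with the stdlib parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /admin/
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Article and browse URLs should remain fetchable by any crawler;
# only non-content areas (here, a made-up /admin/ path) are blocked.
article_ok = parser.can_fetch("*", "http://www.example.com/articles/paper123.pdf")
admin_ok = parser.can_fetch("*", "http://www.example.com/admin/")
```

With rules like these, `article_ok` is true and `admin_ok` is false: crawlers can still reach the papers while the administrative area stays off limits.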