<?xml version="1.0" encoding="utf-8" ?>
<?xml-stylesheet type="text/xsl" href="RSS_xslt_style.asp" version="1.0" ?>
<rss version="2.0" xmlns:WebWizForums="https://syndication.webwiz.net/rss_namespace/">
 <channel>
  <title>ProductCart Shopping Cart Software Forums : Https:// being indexed</title>
  <link>https://forum.productcart.com/</link>
  <description><![CDATA[This is an XML content feed of: ProductCart Shopping Cart Software Forums : Search Engine Optimization : Https:// being indexed]]></description>
  <copyright>Copyright (c) 2006-2013 Web Wiz Forums - All Rights Reserved.</copyright>
  <pubDate>Fri, 10 Apr 2026 19:43:12 +0000</pubDate>
  <lastBuildDate>Tue, 12 Sep 2006 06:58:01 +0000</lastBuildDate>
  <docs>http://blogs.law.harvard.edu/tech/rss</docs>
  <generator>Web Wiz Forums 12.04</generator>
  <ttl>360</ttl>
  <WebWizForums:feedURL>https://forum.productcart.com/RSS_post_feed.asp?TID=238</WebWizForums:feedURL>
  <image>
   <title><![CDATA[ProductCart Shopping Cart Software Forums]]></title>
   <url>https://forum.productcart.com/forum_images/pc_logo_50.png</url>
   <link>https://forum.productcart.com/</link>
  </image>
  <item>
    <title><![CDATA[Https:// being indexed : John, I see the same thing. I have...]]></title>
   <link>https://forum.productcart.com/https-being-indexed_topic238_post1336.html#1336</link>
   <description>
     <![CDATA[<strong>Author:</strong> <a href="https://forum.productcart.com/member_profile.asp?PF=87">geoff</a><br /><strong>Subject:</strong> 238<br /><strong>Posted:</strong> 12-September-2006 at 6:58am<br /><br />John<br>I see the same thing. I have implemented rel="nofollow" on all links that trigger https. I am no expert, but it appears that once Google has indexed a page and the page is still valid, the page is not readily dropped.<br><br>While researching this I came across two other good ideas. The first was to ensure all links use the full address, i.e. http://www.myhealthmyworld.com/coop/pc/mainindex.asp, and not a relative address. The second was to implement your login as a subdomain, e.g. http://www.secure.myhealthmyworld.com, as this would let you have a separate robots.txt that disallows all robots within the subdomain.<br><br>Even with a valid set of exclusions in robots.txt, I still see Google indexing certain excluded pages, but only infrequently.<br><br>Hope this helps someone.<br>]]>
   </description>
   <pubDate>Tue, 12 Sep 2006 06:58:01 +0000</pubDate>
   <guid isPermaLink="true">https://forum.productcart.com/https-being-indexed_topic238_post1336.html#1336</guid>
  </item> 
  <item>
   <title><![CDATA[Https:// being indexed : Been doing some research on why...]]></title>
   <link>https://forum.productcart.com/https-being-indexed_topic238_post652.html#652</link>
   <description>
     <![CDATA[<strong>Author:</strong> <a href="https://forum.productcart.com/member_profile.asp?PF=74">watercrazed</a><br /><strong>Subject:</strong> 238<br /><strong>Posted:</strong> 07-June-2006 at 5:49pm<br /><br />Been doing some research on why I show so many pages (way more than I actually have) indexed in Google. One thing I just found is that for some searches an https:// version of a page shows up before the http:// version. The https:// pages should not be indexed at all: there are no links or anything to my secured checkout page except through the login page, and those are disallowed in robots.txt as well. But my static HTML pages and other non-cart pages are showing up as https://. The only thing I can figure is that once someone enters the secured page and then leaves for the main cart area, they remain in the https:// version.<br><br>Any suggestions are welcome.<br>]]>
   </description>
   <pubDate>Wed, 07 Jun 2006 17:49:14 +0000</pubDate>
   <guid isPermaLink="true">https://forum.productcart.com/https-being-indexed_topic238_post652.html#652</guid>
  </item> 
 </channel>
</rss>