    tag in the head of the HTML document.  By default $MetaRobots is
    set so that robots do not index pages in the Site and PmWiki groups.

    The $RobotPattern variable is used to determine if the user agent
    accessing the site is a robot, and $IsRobotAgent is set accordingly.
    By default this pattern identifies Googlebot, Yahoo! Slurp, msnbot,
    Teoma, ia_archiver, BecomeBot, HTTrack, and MJ12bot as robots.

    If the agent is deemed a robot, then the $RobotActions array is
    checked to see if robots are allowed to perform the given action;
    if not, the robot is immediately sent an HTTP 403 Forbidden response.

    If $EnableRobotCloakActions is set, then a pattern is added to
    $FmtP to hide any "?action=" url parameters in page urls generated
    by PmWiki for actions that robots aren't allowed to access.  This
    can greatly reduce the load on the server by not providing the
    robot with links to pages that it will be forbidden to index anyway.
*/

## $MetaRobots provides the value for the <meta name='robots' ...> tag.
SDV($MetaRobots,
  ($action!='browse'
   || preg_match('#^PmWiki[./](?!PmWiki$)|^Site[./]#', $pagename))
  ? 'noindex,nofollow' : 'index,follow');
if ($MetaRobots)
  $HTMLHeaderFmt['robots'] =
    "  <meta name='robots' content='\$MetaRobots' />\n";

## $RobotPattern is used to identify robots (crawler user agents).
SDV($RobotPattern,
  'Googlebot|Slurp|msnbot|Teoma|ia_archiver|BecomeBot|HTTrack|MJ12bot');
SDV($IsRobotAgent,
  $RobotPattern && preg_match("!$RobotPattern!", @$_SERVER['HTTP_USER_AGENT']));
if (!$IsRobotAgent) return;

## $RobotActions indicates which actions a robot is allowed to perform.
SDVA($RobotActions, array('browse' => 1, 'rss' => 1, 'dc' => 1));
if (!@$RobotActions[$action]) {
  header("HTTP/1.1 403 Forbidden");
  print("
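
## Example: because the defaults above are established with SDV()/SDVA()
## (which only set a variable if it has no value yet), a wiki
## administrator can override them from local/config.php before this
## script runs.  A minimal sketch, assuming a hypothetical crawler named
## "ExampleBot" and a site that also wants to allow robots to request
## ?action=print and to hide ?action= links from them:
##
##   $RobotPattern = 'Googlebot|Slurp|msnbot|ExampleBot';  # "ExampleBot" is hypothetical
##   $RobotActions['print'] = 1;      # allow robots to fetch printable views
##   $EnableRobotCloakActions = 1;    # strip ?action= parameters from links shown to robots
##
## The variable names are the PmWiki settings documented above; the
## specific values are illustrative assumptions, not recommended defaults.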