in "A Standard for Robot Exclusion", at http://www.robotstxt.org/wc/norobots.htmls Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site. The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited. The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.
$NetBSD: distinfo,v 1.1.1.1 2011/07/10 12:47:38 spz Exp $
SHA1 (WWW-RobotRules-6.01.tar.gz) = 426920bbfc73a38dffa319dd2f53b0eb9b294b5b
RMD160 (WWW-RobotRules-6.01.tar.gz) = 6f2c1bef375ad2b2f171b4feae721eec8e1007ec
Size (WWW-RobotRules-6.01.tar.gz) = 9047 bytes