
Author ezio.melotti
Recipients brett.cannon, ezio.melotti, zach.ware
Date 2013-03-10.20:24:20
Message-id <1362947060.8.0.241866993652.issue17066@psf.upfronthosting.co.za>
In-reply-to
Content
Do you think doing something like:

class BaseRobotTest:
    def setUp(self):
        lines = io.StringIO(self.robots_txt).readlines()
        self.parser = urllib.robotparser.RobotFileParser()
        self.parser.parse(lines)

    def test_good(self):
        for url in self.good:
            self.assertTrue(self.parser.can_fetch(...))

    def test_bad(self):
        for url in self.bad:
            self.assertFalse(self.parser.can_fetch(...))

class RobotTestX(BaseRobotTest, unittest.TestCase):
    robots_txt = "..."
    good = [...]
    bad = [...]

...

would be a better approach?

On one hand this is a bit more verbose and doesn't create a separate test for each URL (I don't think that's important though), but on the other hand it gets rid of a lot of magic and makes the tests more understandable.
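For concreteness, here is a minimal runnable sketch of the pattern; the robots.txt content, the agent name, and the good/bad URLs below are made-up placeholders filling in the ellipses, not anything from the actual test suite:

```python
import io
import unittest
import urllib.robotparser


class BaseRobotTest:
    # Subclasses override these class attributes.
    robots_txt = ""
    agent = "test_robotparser"
    good = []
    bad = []

    def setUp(self):
        # Parse the class's robots.txt text into a fresh parser.
        lines = io.StringIO(self.robots_txt).readlines()
        self.parser = urllib.robotparser.RobotFileParser()
        self.parser.parse(lines)

    def test_good(self):
        for url in self.good:
            self.assertTrue(self.parser.can_fetch(self.agent, url))

    def test_bad(self):
        for url in self.bad:
            self.assertFalse(self.parser.can_fetch(self.agent, url))


class UserAgentWildcardTest(BaseRobotTest, unittest.TestCase):
    robots_txt = "User-agent: *\nDisallow: /private/\n"
    good = ["/", "/index.html"]
    bad = ["/private/", "/private/secret.html"]
```

Each RobotTest* class then only declares its robots.txt text plus the expected allowed/disallowed URLs, and running the module with python -m unittest picks up every subclass that also inherits from unittest.TestCase.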