
Author srid
Recipients alexis, dabrahams, eric.araujo, josip, srid, tarek, techtonik
Date 2011-06-07.17:03:43
Message-id <A978CFBC-5308-4F4A-BF79-6F780D2A8D4F@activestate.com>
In-reply-to <1307465338.65.0.0419680452556.issue8927@psf.upfronthosting.co.za>
Content
On 2011-06-07, at 9:48 AM, Éric Araujo wrote:

> 
> Éric Araujo <merwok@netwok.org> added the comment:
> 
>> The only way to fix this is to /not/ install *any* packages prior to
>> resolving *all* dependencies
> 
> packaging.install rolls back in case of error, so the system can’t be left in a half-installed state.  p7g.install is only as smart as p7g.depgraph, however.

Well, if the same rollback behavior is adopted for dependency conflicts (e.g., the scenario in the issue description), it would necessitate rolling back by uninstalling the previous N packages and then installing those N packages again by traversing a different path through the graph (and repeating for every other conflict), would it not?
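To make the rollback-on-conflict idea concrete, here is a minimal sketch; `install_one` and `uninstall_one` are hypothetical stand-ins for packaging's real internals, and the conflict is simulated:

```python
# Hypothetical transactional installer: install packages in order,
# rolling back everything already installed if any step fails.
installed_log = []

def install_one(name):
    if name == "conflicting-pkg":  # simulate a dependency conflict
        raise RuntimeError("version conflict for %s" % name)
    installed_log.append(name)

def uninstall_one(name):
    installed_log.remove(name)

def install_all(names):
    done = []
    try:
        for name in names:
            install_one(name)
            done.append(name)
    except RuntimeError:
        # Roll back the N packages installed so far, in reverse order,
        # so the system is never left half-installed.
        for name in reversed(done):
            uninstall_one(name)
        return False
    return True

ok = install_all(["a", "b", "conflicting-pkg"])
print(ok, installed_log)  # rollback leaves nothing installed
```

The open question above is what happens after the rollback: this sketch simply gives up, whereas a real resolver would retry along a different path.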

>> which means that there needs to be a way to resolve the entire
>> dependency graph for any given package in PyPI.
> 
> PyPI exposes requires, obsoletes and provides for releases that upload PEP 345 metadata; client code using p7g.pypi and p7g.depgraph can then build a dependency graph.

Not all packages upload their release sources (and thus metadata) to PyPI, which is why, I believe, pip scrapes the Simple Index and home_page URLs to find an appropriate sdist for a package. I am not fully aware of what kinds of packages p7g.install is supposed to support, though; I assume setuptools-style projects (using install_requires) are not supported by p7g.install.
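For releases that do expose PEP 345 metadata, building the graph client-side is straightforward. A sketch, using an in-memory dict in place of what p7g.pypi would actually return (the package names and the `requires_dist` key are illustrative, not the real API):

```python
# Illustrative PEP 345-style metadata, keyed by project name.
metadata = {
    "scipy": {"requires_dist": ["numpy"]},
    "numpy": {"requires_dist": []},
    "mypkg": {"requires_dist": ["scipy", "numpy"]},
}

def build_graph(meta):
    """Map each project to the set of projects it requires."""
    return {name: set(info["requires_dist"]) for name, info in meta.items()}

def install_order(graph, target):
    """Depth-first post-order: dependencies come before dependents."""
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in graph.get(node, ()):
            visit(dep)
        order.append(node)
    visit(target)
    return order

graph = build_graph(metadata)
print(install_order(graph, "mypkg"))  # dependencies first, target last
```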

>> the PyPM repository provides a sqlite db containing dependency
>> information for all packages and their versions.
> 
> This experiment with a local copy of the full repo graph is interesting.  Do you have blog posts or something talking about synchronization issues, dealing with multiple repositories, using SQL vs. something less ugly <wink>, etc.?

The local index cache is automatically updated at most once a day. Multiple repositories are searched linearly, in the configured order. SQL is just a data format; the remote index can be in any format (XML, JSON, pickle, ...) as long as the client can easily query the dependency chain.
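As a rough illustration of querying such a local index, here is a sketch using a single `depends` table; the schema is an assumption for the example, not the actual PyPM layout:

```python
import sqlite3

# A toy local dependency index, in the spirit of the PyPM sqlite db.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE depends (name TEXT, version TEXT, requires TEXT)")
db.executemany("INSERT INTO depends VALUES (?, ?, ?)", [
    ("scipy", "0.9.0", "numpy"),
    ("mypkg", "1.0",   "scipy"),
    ("mypkg", "1.0",   "numpy"),
])

def requirements(name, version):
    """Return the direct requirements recorded for a release."""
    rows = db.execute(
        "SELECT requires FROM depends WHERE name = ? AND version = ?",
        (name, version))
    return sorted(r[0] for r in rows)

print(requirements("mypkg", "1.0"))  # ['numpy', 'scipy']
```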

But it's probably much simpler to expose only per-package dependencies (and other metadata) through REST URLs, at the cost of extra network round trips. No index is required. E.g.:

    http://pypi.python.org/metadata/scipy/0.9.0/DIST-INFO -> requires, obsoletes, etc...

(Of course, this assumes that even packages that do not upload their sdists somehow have their metadata available in PyPI; perhaps the server caches it. We have our own PyPI-like mirror that does this.)
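The client side of that REST idea might look like the following sketch; the URL scheme and the response shape are hypothetical (mirroring the DIST-INFO example above), and a canned payload stands in for a live request:

```python
import json

def parse_dist_info(payload):
    """Pull requires/obsoletes out of a hypothetical DIST-INFO response."""
    meta = json.loads(payload)
    return meta.get("requires", []), meta.get("obsoletes", [])

# What GET http://pypi.python.org/metadata/scipy/0.9.0/DIST-INFO
# might return, if the server served JSON:
canned = '{"requires": ["numpy"], "obsoletes": []}'
requires, obsoletes = parse_dist_info(canned)
print(requires)  # ['numpy']
```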