Updated Mar 14, 2017 - Python
distributed-computing
Here are 188 public repositories matching this topic...
In our API docs we currently use

.. autosummary::

   Client
   Client.call_stack
   Client.cancel
   ...

to generate a table of Client methods at the top of the page. Later on we use

.. autoclass:: Client
   :members:

to display the docstrings for all the public methods on Client (here an example for
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset's, nothing should be added to the ModelHub database and the existing Dataset should be reused.
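A minimal sketch of the proposed dedup check. The names here (`enter_data`, `file_hash`, the `data_hash` field, and the in-memory list standing in for the ModelHub database) are illustrative assumptions, not the project's actual API:

```python
import hashlib

def file_hash(path):
    """Return a SHA-256 hex digest of the file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def enter_data(train_path, metadata, db):
    """Create a Dataset record only if (metadata, data hash) is new.

    `db` is a plain list standing in for the ModelHub database.
    """
    digest = file_hash(train_path)
    for dataset in db:
        if dataset["metadata"] == metadata and dataset["data_hash"] == digest:
            return dataset  # unchanged data: reuse the existing Dataset
    dataset = {"metadata": metadata, "data_hash": digest}
    db.append(dataset)
    return dataset
```

Calling `enter_data` twice with the same path and unchanged contents would then return the same record instead of inserting a duplicate.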
This could be an example that uses the supported syntax and APIs:
https://github.com/couler-proj/couler/tree/d34a690/couler/core/syntax
It seems that the number of joining clients (not the number of computing clients) is fixed in fedml_api/data_preprocessing/**/data_loader and cannot be changed, except for the CIFAR10 datasets.
What I mean is that the total number of clients seems to be decided by the dataset, rather than by the input from run_fedavg_distributed_pytorch.sh.
https://github.com/FedML-AI/FedML/blob/3d9fda8d149c95f25ec4898e31df76f035a33b5d/fed
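A hypothetical sketch of what the report is asking for: the client count flows in from the launch script rather than being fixed inside the data loader. The names (`load_partition_data`, `client_number`) are invented for illustration and are not FedML's actual API:

```python
# Illustrative only: a data loader that accepts the client count as a
# parameter instead of hard-coding it per dataset.

def load_partition_data(data_dir, partition_method, client_number, batch_size):
    """Partition a dataset across `client_number` clients.

    `client_number` should come from the launch script
    (e.g. run_fedavg_distributed_pytorch.sh), not from the dataset itself.
    """
    # Toy stand-in for a real dataset: 1000 sample indices.
    samples = list(range(1000))
    # Round-robin split; real code would also support non-IID partitions.
    partitions = {
        cid: samples[cid::client_number] for cid in range(client_number)
    }
    return partitions

# Usage: the same loader works for any requested client count.
parts = load_partition_data("./data", "homo", client_number=4, batch_size=32)
assert len(parts) == 4
assert sum(len(p) for p in parts.values()) == 1000
```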
We should make a pass (namely after the SQL change) to ensure status attributes are consistent throughout the code and all pull from one pydantic Enum model if possible.
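One common shape for such a single source of truth is a `str`-mixin Enum; whether it lives in a pydantic model or stands alone is an open choice, and these member names are invented for illustration:

```python
from enum import Enum

class Status(str, Enum):
    """Single source of truth for status strings (names are illustrative).

    The str mixin makes members compare equal to their raw string values,
    so existing string comparisons keep working during a migration.
    """
    PENDING = "pending"
    RUNNING = "running"
    COMPLETE = "complete"
    ERRORED = "errored"

# Members interoperate with plain strings and round-trip from raw values.
assert Status.RUNNING == "running"
assert Status("complete") is Status.COMPLETE
```

pydantic models can then declare fields typed as this Enum, so validation and the rest of the code all pull from the one definition.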
In several places in the code, there are debug calls to the logger that are inside loops and/or cause expensive evaluations. As the statement's arguments are fully evaluated whether or not the log message is printed, this is poor practice. The following needs to be done:
- Identify debug statements that are either in loops or that have expensive evaluation (so just about anything beyond a simple string)
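The problem and the usual fix can be sketched as follows (`expensive_summary` is an invented stand-in for whatever costly work feeds the log message):

```python
import logging

logger = logging.getLogger(__name__)

def expensive_summary(items):
    """Stand-in for a costly computation used only to build a log message."""
    return ", ".join(sorted(map(str, items)))

items = [3, 1, 2]

# Bad: string concatenation/f-strings evaluate the argument even when
# DEBUG logging is disabled:
#   logger.debug("summary: " + expensive_summary(items))

# %-style formatting defers the *formatting*, but the argument itself is
# still evaluated, so genuinely expensive work also needs an explicit guard:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("summary: %s", expensive_summary(items))
```

With the guard, `expensive_summary` runs only when DEBUG output would actually be emitted, which is what matters inside hot loops.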
Is your feature request related to a problem? Please describe.
Currently, deploying learningOrchestra requires knowledge of architecture and infrastructure.
Describe the solution you'd like
Is there a way to facilitate or abstract the infrastructure requirements needed to deploy learningOrchestra?
Describe alternatives you've considered
Additional context
Describe the bug
I found that some argument names in the framework aren't consistent.
For example:
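The report's actual examples are cut off above; as a purely invented illustration of the kind of inconsistency meant, two functions spelling the same concept differently:

```python
# Invented illustration only: the same concept under two names.

def train(num_workers=4):       # one function calls it `num_workers`
    return num_workers

def evaluate(n_workers=4):      # another calls it `n_workers`
    return n_workers

# A consistent framework would pick one spelling everywhere, so callers
# don't have to remember which function uses which name.
assert train(num_workers=2) == evaluate(n_workers=2)
```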