Our GitHub CI workflow now tests whether 4CAT can be installed via Docker, but doesn't actually test any functionality. One way to make our deployments more robust is to have a 'test' dataset for some (or all) data sources and run a bunch of standard processors on them. This should be relatively straightforward and would test parts of both the backend and frontend.
- Provide sample data files for each data source
- Add code to turn these into datasets (the optimal way to do this will depend on the data source):
  - Zeeschuimer-powered datasets should be created in such a way that `map_item()` is called (see the first sketch below)
  - CSV uploads should be tested in such a way that the CSV is actually parsed, perhaps with separate sample files for all supported formats (see the second sketch below)
  - Scraper-based datasets should be created in a way that bypasses the actual scraping, since we don't want to rely on the platform/site being online for the test
- Select standard processors that can be run on all of those: either some that can run on any dataset, or relevant processors defined per data source, or a mix of both (see the third sketch below)
- Determine how to integrate this in the CI workflow...
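
For the Zeeschuimer case, a rough sketch of what such a test could look like. The sample directory, file names and the commented-out data source import are placeholders, not the actual repository layout; the test simply feeds every line of a static NDJSON sample through the relevant data source's `map_item()`:

```python
import json
from pathlib import Path

import pytest

# Hypothetical location for Zeeschuimer-style NDJSON sample files
SAMPLE_DIR = Path(__file__).parent / "samples"

# Hypothetical mapping from sample file name to the data source class whose
# map_item() should be exercised; the import path below is an assumption.
# from datasources.tiktok.search_tiktok import SearchTikTok
# DATASOURCES = {"tiktok.ndjson": SearchTikTok}
DATASOURCES = {}


def load_ndjson(path):
    """Yield one raw platform item per NDJSON line, the way Zeeschuimer stores them."""
    with path.open(encoding="utf-8") as infile:
        for line in infile:
            if line.strip():
                yield json.loads(line)


@pytest.mark.parametrize("sample_name", sorted(DATASOURCES))
def test_map_item_on_sample(sample_name):
    """map_item() should handle every item in the bundled sample without raising."""
    datasource = DATASOURCES[sample_name]
    for raw_item in load_ndjson(SAMPLE_DIR / sample_name):
        mapped = datasource.map_item(raw_item)
        assert mapped is not None
```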
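
For CSV uploads, something along these lines could work, with one sample file per supported format. The required column set here is an assumption and would need to match whatever the upload importer actually expects:

```python
import csv
from pathlib import Path

import pytest

# Hypothetical location for CSV sample files, one per supported export format
CSV_SAMPLES = sorted((Path(__file__).parent / "samples").glob("*.csv"))
REQUIRED_COLUMNS = {"id", "timestamp", "body"}  # assumed minimal column set


@pytest.mark.parametrize("csv_file", CSV_SAMPLES, ids=lambda p: p.name)
def test_csv_sample_parses(csv_file):
    """Every bundled CSV sample should parse and contain the expected columns."""
    with csv_file.open(encoding="utf-8", newline="") as infile:
        reader = csv.DictReader(infile)
        assert REQUIRED_COLUMNS.issubset(reader.fieldnames or [])
        rows = list(reader)
        assert rows, f"{csv_file.name} should contain at least one row"
```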
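
For the processor selection, the "mix of both" option could be as simple as a small registry; the processor IDs and data source names below are placeholders:

```python
# Hedged sketch of the "mix of both" option: processor IDs to run per data
# source, with "*" for processors assumed to work on any dataset.
TEST_PROCESSORS = {
    "*": ["count-posts", "wordcloud"],  # assumed format-agnostic
    "tiktok": ["hashtag-network"],      # per-datasource additions
    "upload": [],
}


def processors_for(datasource):
    """Return the processor IDs a CI run should exercise for this data source."""
    return TEST_PROCESSORS["*"] + TEST_PROCESSORS.get(datasource, [])
```

A parametrised pytest run built from that registry could then be invoked from the existing workflow once the Docker containers are up.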