I use term frequency-inverse document frequency (tf-idf) to weight
the importance of the words in the wordcloud. If we used pure
frequencies, the wordcloud would largely consist of words conveying
little meaning ("the", "and", ...).
diff --git a/docs/404.html b/docs/404.html
index b0c0e68..89e0462 100644
--- a/docs/404.html
+++ b/docs/404.html
@@ -39,7 +39,7 @@
eurlex
- 0.4.5
+ 0.4.6
diff --git a/docs/articles/council.html b/docs/articles/council.html
index 57806cd..dce79f6 100644
--- a/docs/articles/council.html
+++ b/docs/articles/council.html
@@ -5,7 +5,7 @@
-Data on votes in the Council of the EU • eurlex
+Voting in the Council of the EU • eurlex
[remaining pkgdown HTML changes omitted: navigation updates in docs/ pages and the new rendered article docs/articles/sparql-queries.html ("Make SPARQL queries with eurlex"), whose extracted text duplicates vignettes/sparql-queries.Rmd below]
diff --git a/docs/sitemap.xml b/docs/sitemap.xml
index 7b649e7..0c82d82 100644
--- a/docs/sitemap.xml
+++ b/docs/sitemap.xml
@@ -12,6 +12,9 @@
/articles/index.html
+
+ /articles/sparql-queries.html
+ /authors.html
diff --git a/man/elx_fetch_data.Rd b/man/elx_fetch_data.Rd
index 3a77090..ae941ed 100644
--- a/man/elx_fetch_data.Rd
+++ b/man/elx_fetch_data.Rd
@@ -36,7 +36,7 @@ elx_fetch_data(
A character vector of length one containing the result. When \code{type = "text"}, named character vector where the name contains the source of the text.
}
\description{
-Wraps httr::GET with pre-specified headers and parses retrieved data.
+Get titles, texts, identifiers and XML notices for EU resources.
}
\examples{
\donttest{
diff --git a/tests/testthat/test-fetch.R b/tests/testthat/test-fetch.R
index c4f1c3b..5b8a139 100644
--- a/tests/testthat/test-fetch.R
+++ b/tests/testthat/test-fetch.R
@@ -1,4 +1,4 @@
-testthat::test_that("fetching data works", {
+testthat::test_that("fetching notices works", {
testthat::skip_on_cran()
diff --git a/tests/testthat/test-query.R b/tests/testthat/test-query.R
index 64631e1..6bf5737 100644
--- a/tests/testthat/test-query.R
+++ b/tests/testthat/test-query.R
@@ -1,4 +1,4 @@
-testthat::test_that("directives work", {
+testthat::test_that("queries can be made", {
testthat::skip_on_cran()
diff --git a/vignettes/council.Rmd b/vignettes/articles/council.Rmd
similarity index 95%
rename from vignettes/council.Rmd
rename to vignettes/articles/council.Rmd
index 351d680..bcc5259 100644
--- a/vignettes/council.Rmd
+++ b/vignettes/articles/council.Rmd
@@ -1,13 +1,13 @@
---
-title: "Data on votes in the Council of the EU"
+title: "Voting in the Council of the EU"
output: rmarkdown::html_vignette
vignette: >
- %\VignetteIndexEntry{Data on votes in the Council of the EU}
+ %\VignetteIndexEntry{Voting in the Council of the EU}
%\VignetteEngine{knitr::rmarkdown}
%\VignetteEncoding{UTF-8}
---
-```{r, include = FALSE}
+```{r, echo = FALSE, message = FALSE, warning=FALSE, error=FALSE, include=FALSE}
knitr::opts_chunk$set(
collapse = TRUE,
comment = "#>"
@@ -18,7 +18,9 @@ Few would disagree that the Council of the European Union (EU) -- sometimes also
Under the OLP, which is nowadays the most common type of law-making procedure, the Council should make decisions by qualified majority. In practice, it often decides by consensus, as Member States tend to avoid open disagreements. Still, enough votes are taken to give us some insight into the variation in Member State governments' behaviour. We access these through a dedicated API maintained by the Council, which is also wrapped in the `eurlex` package.
-## Council votes
+## Data on Council votes
+
+First we obtain the available data on Council votes using `eurlex::elx_council_votes()` and process the API response.
```{r votingdata}
# packages
@@ -60,7 +62,6 @@ country_votes_prop <- country_votes_n %>%
n_votes = sum(n),
prop = round(value / n_votes, 3)) %>%
ungroup()
-
```
Excluding votes where all governments voted in favour, we are left with between ```r max(country_votes_prop$n_votes, na.rm = T)``` and ```r min(country_votes_prop$n_votes, na.rm = T)``` votes per Member State. While these numbers do not represent the entire historical voting record, they should still help us lift the veil on variation in Member States' propensity to disagree. Note that due to opt-outs not all countries have participated in every vote.
diff --git a/vignettes/eurlexpkg.Rmd b/vignettes/articles/eurlexpkg.Rmd
similarity index 98%
rename from vignettes/eurlexpkg.Rmd
rename to vignettes/articles/eurlexpkg.Rmd
index bf00a2f..75e5f22 100644
--- a/vignettes/eurlexpkg.Rmd
+++ b/vignettes/articles/eurlexpkg.Rmd
@@ -1,267 +1,267 @@
----
-title: "eurlex: Retrieve data on European Union law in R"
-output: rmarkdown::html_vignette
-description: >
- Retrieve data on European Union law in R with
- pre-defined SPARQL and REST queries.
-vignette: >
- %\VignetteIndexEntry{eurlex: Retrieve data on European Union law in R}
- %\VignetteEngine{knitr::rmarkdown}
- \usepackage[utf8]{inputenc}
----
-
-```{r, echo = FALSE, message = FALSE, warning=FALSE, error=FALSE}
-knitr::opts_chunk$set(collapse = T, comment = "#>")
-options(tibble.print_min = 4, tibble.print_max = 4)
-```
-
-This vignette shows how to use the `eurlex` R package to retrieve data on European Union law.
-
-# Introduction
-
-Dozens of political scientists and legal scholars use data on European Union laws in their research. The provenance of these data is rarely discussed. More often than not, researchers resort to the quick and dirty technique of scraping entire html pages from `eur-lex.europa.eu`. This is not the optimal, nor preferred (from the perspective of the server host) approach of retrieving data, however, especially as the Publication Office of the European Union, the public body behind Eur-Lex, operates several dedicated APIs for automated retrieval of its data.
-
-The allure of web scraping is completely understandable. Not only is it easier to download data that can be readily seen in a user-friendly manner through a browser, using the dedicated APIs requires technical knowledge of semantic web and Client URL technologies, which is not necessarily widespread among researchers. And why go through the pain of learning how to compile SPARQL queries when it is much easier to simply download the web page?
-
-The `eurlex` R package attempts to significantly reduce the overhead associated with using the SPARQL and REST APIs made available by the EU Publication Office. Although at present it does not offer access to the same array of information as comprehensive web scraping might, the package provides simpler, more efficient and transparent access to data on European Union law. This vignette gives a quick guide to the package and an even quicker introduction to the Eur-Lex dataverse.
-
-# The `eurlex` package
-
-The `eurlex` package currently envisions the typical use-case to consist of getting bulk information about EU law and policy into R as fast as possible. The package contains three core functions to achieve that objective: `elx_make_query()` to create SPARQL queries based on user input; `elx_run_query()` to execute the pre-made or any other manually input query; and `elx_fetch_data()` to fire GET requests for certain metadata to the REST API.
-
-The package also contains largely self-explanatory functions for retrieving data on EU court cases (`elx_curia_list()`) and Council votes (`elx_council_votes()`) from outside Eur-Lex. More advanced users might be interested in downloading and custom-parsing XML notices with `elx_download_xml()`.
-
-## `elx_make_query()`: Generate SPARQL queries
-
-The function `elx_make_query` takes as its first argument the type of resource to be retrieved from the semantic database that powers Eur-Lex (and other publications) called Cellar.
-
-```{r makequery, message = FALSE, warning=FALSE, error=FALSE}
-library(eurlex)
-library(dplyr)
-
-query_dir <- elx_make_query(resource_type = "directive")
-```
-
-
-```{r precompute, include=FALSE}
-dirs <- elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
- elx_run_query()
-
-results <- dirs %>% select(-force,-date)
-```
-
-Currently, it is possible to choose from among a host of resource types, including directives, regulations and even case law (see function description for the full list). It is also possible to manually specify a resource type from the [eligible list](http://publications.europa.eu/resource/authority/resource-type).^[Note, however, that not all resource types will work properly with the pre-specified query.]
-
-The choice of resource type is then reflected in the SPARQL query generated by the function:
-
-```{r}
-query_dir %>%
- cat()
-
-elx_make_query(resource_type = "caselaw") %>%
- cat()
-
-elx_make_query(resource_type = "manual", manual_type = "SWD") %>%
- cat()
-
-```
-
-There are various ways of querying the same information in the Cellar database due to the existence of several overlapping classes and identifiers describing the same resources. The queries generated by the function should offer a reliable way of obtaining exhaustive results, as they have been validated by the helpdesk of the Publication Office. At the same time, it is always possible there will be issues either on the query or the database side; please report any you encounter through Github.
-
-The other arguments in `elx_make_query()` relate to additional metadata to be returned. The results include by default the [CELEX number](https://eur-lex.europa.eu/content/tools/TableOfSectors/types_of_documents_in_eurlex.html) and exclude corrigenda (corrections of errors in legislation). Other data needs to be opted into. Make sure to select ones that are logically compatible (e.g. case law does not have a legal basis). More options should be added in the future.
-
-Note that availability of data for each variable might have an impact on the results. The data frame returned by the query might be shrunken to the size of the variable with most missing data. It is recommended to always compare results from a desired query to a minimal query requesting only celex ids.
-
-```{r}
-elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
- cat()
-
-# minimal query: elx_make_query(resource_type = "directive")
-
-elx_make_query(resource_type = "recommendation", include_date = TRUE, include_lbs = TRUE) %>%
- cat()
-
-# minimal query: elx_make_query(resource_type = "recommendation")
-
-```
-
-You can also decide to not specify any resource types, in which case all types of documents will be returned. As there are over a million documents with a CELEX identifier, this is likely not efficient for a majority of users. But since version 0.3.5 it is possible to request documents belonging to a particular ["sector"](https://eur-lex.europa.eu/content/tools/TableOfSectors/types_of_documents_in_eurlex.html) or [directory code](https://eur-lex.europa.eu/browse/directories/legislation.html).
-
-```{r}
-# request documents from directory 18 ("Common Foreign and Security Policy")
-# and sector 3 ("Legal acts")
-
-elx_make_query(resource_type = "any",
- directory = "18",
- sector = 3) %>%
- cat()
-```
-
-Now that we have a query, we are ready to run it.
-
-## `elx_run_query()`: Execute SPARQL queries
-
-`elx_run_query()` sends SPARQL queries to a pre-specified endpoint. The function takes the query string as the main argument, which means you can manually pass it any working SPARQL query (relevant to official EU publications).
-
-```{r runquery, eval=FALSE}
-results <- elx_run_query(query = query_dir)
-
-# the functions are compatible with piping
-#
-# elx_make_query("directive") %>%
-# elx_run_query()
-```
-
-```{r}
-as_tibble(results)
-```
-
-The function outputs a `data.frame` where each column corresponds to one of the requested variables, while the rows accumulate observations of the resource type satisfying the query criteria. Obviously, the more data is to be returned, the longer the execution time, varying from a few seconds to several minutes, depending also on your connection.
-
-The first column always contains the unique URI of a "work" (legislative act or court judgment) which identifies each resource in Cellar. Several human-readable identifiers are normally associated with each "work" but the most useful one is CELEX, retrieved by default.^[Occasionally, you may encounter legal acts without CELEX numbers, especially when digging through older legislation. It is good to report these to the Eur-Lex helpdesk.]
-
-One column you should always pay attention to is `type` (as in `resource_type`). The URIs contained there reflect the FILTER argument in the SPARQL query, which is manually pre-specified. All resources are indexed as being of one type or another. For example, when retrieving directives, the results are going to return also delegated directives, which might not be desirable, depending on your needs. You can filter results by `type` to make the necessary adjustments. The queries are expansive by default in the spirit of erring on the side of over-inclusiveness rather than vice versa.
-
-```{r}
-head(results$type,5)
-
-results %>%
- distinct(type)
-```
-
-The data is returned in the long format, which means that rows are recycled up to the length of the variable with the most data points. For example, if 20 directives are returned, each with two legal bases, the resulting `data.frame` will have 40 rows. Some variables, such as dates, contain unexpectedly several entries for some documents. You should always check the number of unique identifiers in the results instead of assuming that each row is a unique observation.
-
-### EuroVoc descriptors
-
-EuroVoc is a multilingual thesaurus, keywords from which are used to describe the content of European Union documents. Most resource types that can be retrieved with the pre-defined queries in this package can be accompanied by EuroVoc keywords and these can be retrieved as other variables.
-
-```{r eurovoc}
-
-rec_eurovoc <- elx_make_query("recommendation", include_eurovoc = TRUE, limit = 10) %>%
- elx_run_query() # truncated results for sake of the example
-
-rec_eurovoc %>%
- select(celex, eurovoc)
-
-```
-
-By default, the endpoint returns the EuroVoc concept codes rather than the labels (keywords). The function `elx_label_eurovoc()` needs to be called to obtain a look-up table with the labels.
-
-```{r eurovoctable}
-eurovoc_lookup <- elx_label_eurovoc(uri_eurovoc = rec_eurovoc$eurovoc)
-
-print(eurovoc_lookup)
-```
-
-The results include labels only for unique identifiers, but with `dplyr::left_join()` it is straightforward to append the labels to the entire dataset.
-
-```{r appendlabs}
-rec_eurovoc %>%
- left_join(eurovoc_lookup)
-```
-
-As elsewhere in the API, we can tap into the multilingual nature of EU documents also when it comes to the EuroVoc keywords. Moreover, most concepts in the thesaurus are associated with alternative labels; these can be returned as well (separated by a comma).
-
-```{r}
-eurovoc_lookup <- elx_label_eurovoc(uri_eurovoc = rec_eurovoc$eurovoc,
- alt_labels = TRUE,
- language = "sk")
-
-rec_eurovoc %>%
- left_join(eurovoc_lookup) %>%
- select(celex, eurovoc, labels)
-```
-
-## `elx_fetch_data()`: Fire GET requests
-
-A core contribution of the SPARQL requests is that we obtain a comprehensive list of identifiers that we can subsequently use to obtain more data relating to the document in question. While the results of the SPARQL queries are useful also for webscraping (with the `rvest` package), the function `elx_fetch_data()` enables us to fire GET requests to retrieve data on documents with known identifiers (including Cellar URI).
-
-One of the most sought-after data in the Eur-Lex dataverse is the text. It is possible now to automate the pipeline for downloading html and plain texts from Eur-Lex. Similarly, you can retrieve the title of the document. For both you can specify also the desired language (English by default). Other metadata might be added in the future.
-
-```{r getdatapur, message = FALSE, warning=FALSE, error=FALSE}
-# the function is not vectorized by default
-# elx_fetch_data(url = results$work[1], type = "title")
-
-# we can use purrr::map() to play that role
-library(purrr)
-
-# wrapping in possibly() would catch errors in case there is a server issue
-dir_titles <- results[1:5,] %>% # take the first 5 directives only to save time
- mutate(work = paste("http://publications.europa.eu/resource/cellar/", work, sep = "")) |>
- mutate(title = map_chr(work, possibly(elx_fetch_data, otherwise = NA_character_),
- "title")) %>%
- as_tibble() %>%
- select(celex, title)
-
-print(dir_titles)
-
-```
-
-Note that text requests are by far the most time-intensive; requesting the full text for thousands of documents is liable to extend the run-time into hours. Texts are retrieved from html by priority, but methods for .pdfs and .docs are also implemented.^[It is worth pointing out that the html and pdf contents of older case law differs. Whereas typically the html file is only going to contain a summary and grounds of a judgment, the pdf should also contain background to the dispute.] The function even handles multi-document resources (by pasting them together).
-
-# Application
-
-In this section I showcase a simple application of `eurlex` on making overviews of EU legislation. First, we collate data on directives.
-
-```{r dirsdata, eval=FALSE}
-dirs <- elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
- elx_run_query()
-```
-
-Let's calculate the proportion of directives currently in force in the entire set of directives ever adopted. This variable offers a particularly good demonstration of the usefulness of the package to retrieve EU law data, because it changes every day, as new acts enter into force and old ones drop out. Regularly scraping webpages for this purpose and scale is simply impractical and disproportional.
-
-```{r firstplot, message = FALSE, warning=FALSE, error=FALSE}
-library(ggplot2)
-
-dirs %>%
- count(force) %>%
- ggplot(aes(x = force, y = n)) +
- geom_col()
-```
-
-Directives become naturally outdated with time. It might be all the more interesting to see which older acts are thus still surviving.
-
-```{r dirforce}
-dirs %>%
- filter(!is.na(force)) %>%
- mutate(date = as.Date(date)) %>%
- ggplot(aes(x = date, y = celex)) +
- geom_point(aes(color = force), alpha = 0.1) +
- theme(axis.text.y = element_blank(),
- axis.line.y = element_blank(),
- axis.ticks.y = element_blank())
-```
-
-We want to know a bit more about some directives from the early 1970s that are still in force today. Their titles could give us a clue.
-
-```{r dirtitles}
-dirs_1970_title <- dirs %>%
- filter(between(as.Date(date), as.Date("1970-01-01"), as.Date("1973-01-01")),
- force == "true") %>%
- mutate(work = paste("http://publications.europa.eu/resource/cellar/", work, sep = "")) |>
- mutate(title = map_chr(work, possibly(elx_fetch_data, otherwise = NA_character_),
- "title")) %>%
- as_tibble()
-
-print(dirs_1970_title)
-```
-
-I will use the `tidytext` package to get a quick idea of what the legislation is about.
-
-```{r wordcloud, message = FALSE, warning=FALSE, error=FALSE}
-library(tidytext)
-library(wordcloud)
-
-# wordcloud
-dirs_1970_title %>%
- select(celex,title) %>%
- unnest_tokens(word, title) %>%
- count(celex, word, sort = TRUE) %>%
- filter(!grepl("\\d", word)) %>%
- bind_tf_idf(word, celex, n) %>%
- with(wordcloud(word, tf_idf, max.words = 40))
-```
-
-I use term-frequency inverse-document frequency (tf-idf) to weight the importance of the words in the wordcloud. If we used pure frequencies, the wordcloud would largely consist of words conveying little meaning ("the", "and", ...).
-
-This is an extremely basic application of the `eurlex` package. Much more sophisticated methods can be used to analyse both the content and metadata of European Union legislation. If the package is useful for your research, please cite the [accompanying paper](https://www.tandfonline.com/doi/full/10.1080/2474736X.2020.1870150).^[Michal Ovádek (2021) Facilitating access to data on European Union laws, Political Research Exchange, 3:1, DOI: [10.1080/2474736X.2020.1870150](https://www.tandfonline.com/doi/full/10.1080/2474736X.2020.1870150)]
+---
+title: "eurlex: Retrieve data on European Union law in R"
+output: rmarkdown::html_vignette
+description: >
+ Retrieve data on European Union law in R with
+ pre-defined SPARQL and REST queries.
+vignette: >
+ %\VignetteIndexEntry{eurlex: Retrieve data on European Union law in R}
+ %\VignetteEngine{knitr::rmarkdown}
+ \usepackage[utf8]{inputenc}
+---
+
+```{r, echo = FALSE, message = FALSE, warning=FALSE, error=FALSE}
+knitr::opts_chunk$set(collapse = T, comment = "#>")
+options(tibble.print_min = 4, tibble.print_max = 4)
+```
+
+This vignette shows how to use the `eurlex` R package to retrieve data on European Union law.
+
+# Introduction
+
+Dozens of political scientists and legal scholars use data on European Union laws in their research. The provenance of these data is rarely discussed. More often than not, researchers resort to the quick and dirty technique of scraping entire html pages from `eur-lex.europa.eu`. However, this is neither the optimal nor the preferred (from the perspective of the server host) way of retrieving data, especially as the Publication Office of the European Union, the public body behind Eur-Lex, operates several dedicated APIs for the automated retrieval of its data.
+
+The allure of web scraping is completely understandable. Not only is it easier to download data that can be readily viewed in a user-friendly manner in a browser; using the dedicated APIs also requires technical knowledge of semantic web and Client URL technologies, which is not necessarily widespread among researchers. And why go through the pain of learning how to compile SPARQL queries when it is much easier to simply download the web page?
+
+The `eurlex` R package attempts to significantly reduce the overhead associated with using the SPARQL and REST APIs made available by the EU Publication Office. Although at present it does not offer access to the same array of information as comprehensive web scraping might, the package provides simpler, more efficient and transparent access to data on European Union law. This vignette gives a quick guide to the package and an even quicker introduction to the Eur-Lex dataverse.
+
+# The `eurlex` package
+
+The `eurlex` package currently envisions the typical use case as getting bulk information about EU law and policy into R as quickly as possible. The package contains three core functions to achieve that objective: `elx_make_query()` to create SPARQL queries based on user input; `elx_run_query()` to execute the pre-made or any other manually input query; and `elx_fetch_data()` to fire GET requests for certain metadata to the REST API.
+
+The package also contains largely self-explanatory functions for retrieving data on EU court cases (`elx_curia_list()`) and Council votes (`elx_council_votes()`) from outside Eur-Lex. More advanced users might be interested in downloading and custom-parsing XML notices with `elx_download_xml()`.
+
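+A minimal sketch of these auxiliary helpers (not evaluated here; all of them hit live endpoints and can take a while to respond). The `elx_download_xml()` call is left commented out and its URL is purely illustrative:
+
+```{r auxiliary, eval=FALSE}
+# complete list of EU court cases from the Curia website
+cases <- elx_curia_list()
+
+# voting records from the Council's API
+votes <- elx_council_votes()
+
+# download the XML notice of a resource (illustrative URL)
+# elx_download_xml(url = "http://publications.europa.eu/resource/celex/32013R1303")
+```
+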
+## `elx_make_query()`: Generate SPARQL queries
+
+The function `elx_make_query()` takes as its first argument the type of resource to be retrieved from Cellar, the semantic database that powers Eur-Lex (and other EU publications).
+
+```{r makequery, message = FALSE, warning=FALSE, error=FALSE}
+library(eurlex)
+library(dplyr)
+
+query_dir <- elx_make_query(resource_type = "directive")
+```
+
+
+```{r precompute, include=FALSE}
+dirs <- elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
+ elx_run_query()
+
+results <- dirs %>% select(-force,-date)
+```
+
+Currently, it is possible to choose from among a host of resource types, including directives, regulations and even case law (see function description for the full list). It is also possible to manually specify a resource type from the [eligible list](http://publications.europa.eu/resource/authority/resource-type).^[Note, however, that not all resource types will work properly with the pre-specified query.]
+
+The choice of resource type is then reflected in the SPARQL query generated by the function:
+
+```{r}
+query_dir %>%
+ cat()
+
+elx_make_query(resource_type = "caselaw") %>%
+ cat()
+
+elx_make_query(resource_type = "manual", manual_type = "SWD") %>%
+ cat()
+
+```
+
+There are various ways of querying the same information in the Cellar database due to the existence of several overlapping classes and identifiers describing the same resources. The queries generated by the function should offer a reliable way of obtaining exhaustive results, as they have been validated by the helpdesk of the Publication Office. At the same time, it is always possible that there will be issues on either the query or the database side; please report any you encounter through GitHub.
+
+The other arguments in `elx_make_query()` relate to additional metadata to be returned. The results include the [CELEX number](https://eur-lex.europa.eu/content/tools/TableOfSectors/types_of_documents_in_eurlex.html) by default and exclude corrigenda (corrections of errors in legislation). Other data need to be opted into. Make sure to select options that are logically compatible (e.g. case law does not have a legal basis). More options should be added in the future.
+
+Note that the availability of data for each variable might have an impact on the results. The data frame returned by the query may shrink to the size of the variable with the most missing data. It is therefore recommended to always compare the results of a desired query to those of a minimal query requesting only CELEX ids.
+
+```{r}
+elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
+ cat()
+
+# minimal query: elx_make_query(resource_type = "directive")
+
+elx_make_query(resource_type = "recommendation", include_date = TRUE, include_lbs = TRUE) %>%
+ cat()
+
+# minimal query: elx_make_query(resource_type = "recommendation")
+
+```
+
+You can also decide not to specify any resource type, in which case all types of documents will be returned. As there are over a million documents with a CELEX identifier, this is likely not efficient for the majority of users. But since version 0.3.5 it is possible to request documents belonging to a particular ["sector"](https://eur-lex.europa.eu/content/tools/TableOfSectors/types_of_documents_in_eurlex.html) or [directory code](https://eur-lex.europa.eu/browse/directories/legislation.html).
+
+```{r}
+# request documents from directory 18 ("Common Foreign and Security Policy")
+# and sector 3 ("Legal acts")
+
+elx_make_query(resource_type = "any",
+ directory = "18",
+ sector = 3) %>%
+ cat()
+```
+
+Now that we have a query, we are ready to run it.
+
+## `elx_run_query()`: Execute SPARQL queries
+
+`elx_run_query()` sends SPARQL queries to a pre-specified endpoint. The function takes the query string as the main argument, which means you can manually pass it any working SPARQL query (relevant to official EU publications).
+
+```{r runquery, eval=FALSE}
+results <- elx_run_query(query = query_dir)
+
+# the functions are compatible with piping
+#
+# elx_make_query("directive") %>%
+# elx_run_query()
+```
+
+```{r}
+as_tibble(results)
+```
+
+The function outputs a `data.frame` where each column corresponds to one of the requested variables, while the rows accumulate observations of the resource type satisfying the query criteria. Naturally, the more data is requested, the longer the execution time, which can vary from a few seconds to several minutes depending also on your connection.
+
+The first column always contains the unique URI of a "work" (legislative act or court judgment) which identifies each resource in Cellar. Several human-readable identifiers are normally associated with each "work" but the most useful one is CELEX, retrieved by default.^[Occasionally, you may encounter legal acts without CELEX numbers, especially when digging through older legislation. It is good to report these to the Eur-Lex helpdesk.]
+
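+For instance, to see the two identifiers side by side (a quick sketch using the results from above):
+
+```{r}
+results %>%
+  select(work, celex) %>%
+  head(3)
+```
+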
+One column you should always pay attention to is `type` (as in `resource_type`). The URIs contained there reflect the FILTER argument in the SPARQL query, which is manually pre-specified. All resources are indexed as being of one type or another. For example, when retrieving directives, the results will also include delegated directives, which might not be desirable, depending on your needs. You can filter results by `type` to make the necessary adjustments. The queries are expansive by default, in the spirit of erring on the side of over-inclusiveness rather than the opposite.
+
+```{r}
+head(results$type,5)
+
+results %>%
+ distinct(type)
+```
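+
+For instance, to drop delegated and implementing directives, a filter along these lines should work (assuming, as an illustration, that their resource-type URIs contain "DIR_DEL" and "DIR_IMPL"):
+
+```{r, eval=FALSE}
+results %>%
+  filter(!grepl("DIR_DEL|DIR_IMPL", type))
+```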
+
+The data is returned in the long format, which means that rows are recycled up to the length of the variable with the most data points. For example, if 20 directives are returned, each with two legal bases, the resulting `data.frame` will have 40 rows. Some variables, such as dates, unexpectedly contain several entries for some documents. You should always check the number of unique identifiers in the results instead of assuming that each row is a unique observation.
+
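+A quick sanity check along these lines is usually worth running:
+
+```{r}
+# rows versus unique acts: in long format these need not match
+nrow(results)
+n_distinct(results$celex)
+```
+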
+### EuroVoc descriptors
+
+EuroVoc is a multilingual thesaurus whose keywords are used to describe the content of European Union documents. Most resource types that can be retrieved with the pre-defined queries in this package can be accompanied by EuroVoc keywords, and these can be retrieved like any other variable.
+
+```{r eurovoc}
+
+rec_eurovoc <- elx_make_query("recommendation", include_eurovoc = TRUE, limit = 10) %>%
+ elx_run_query() # truncated results for sake of the example
+
+rec_eurovoc %>%
+ select(celex, eurovoc)
+
+```
+
+By default, the endpoint returns the EuroVoc concept codes rather than the labels (keywords). The function `elx_label_eurovoc()` needs to be called to obtain a look-up table with the labels.
+
+```{r eurovoctable}
+eurovoc_lookup <- elx_label_eurovoc(uri_eurovoc = rec_eurovoc$eurovoc)
+
+print(eurovoc_lookup)
+```
+
+The results include labels only for unique identifiers, but with `dplyr::left_join()` it is straightforward to append the labels to the entire dataset.
+
+```{r appendlabs}
+rec_eurovoc %>%
+ left_join(eurovoc_lookup)
+```
+
+As elsewhere in the API, we can tap into the multilingual nature of EU documents when it comes to the EuroVoc keywords as well. Moreover, most concepts in the thesaurus are associated with alternative labels; these can be returned too (separated by a comma).
+
+```{r}
+eurovoc_lookup <- elx_label_eurovoc(uri_eurovoc = rec_eurovoc$eurovoc,
+ alt_labels = TRUE,
+ language = "sk")
+
+rec_eurovoc %>%
+ left_join(eurovoc_lookup) %>%
+ select(celex, eurovoc, labels)
+```
+
+## `elx_fetch_data()`: Fire GET requests
+
+A core contribution of the SPARQL requests is that we obtain a comprehensive list of identifiers that we can subsequently use to retrieve more data on the documents in question. While the results of the SPARQL queries are also useful for web scraping (with the `rvest` package), the function `elx_fetch_data()` enables us to fire GET requests to retrieve data on documents with known identifiers (including the Cellar URI).
+
+Among the most sought-after data in the Eur-Lex dataverse are the document texts. It is now possible to automate the pipeline for downloading html and plain texts from Eur-Lex. Similarly, you can retrieve the title of a document. For both, you can also specify the desired language (English by default). Other metadata might be added in the future.
+
+```{r getdatapur, message = FALSE, warning=FALSE, error=FALSE}
+# the function is not vectorized by default
+# elx_fetch_data(url = results$work[1], type = "title")
+
+# we can use purrr::map() to play that role
+library(purrr)
+
+# wrapping in possibly() would catch errors in case there is a server issue
+dir_titles <- results[1:5,] %>% # take the first 5 directives only to save time
+  mutate(work = paste("http://publications.europa.eu/resource/cellar/", work, sep = "")) %>%
+ mutate(title = map_chr(work, possibly(elx_fetch_data, otherwise = NA_character_),
+ "title")) %>%
+ as_tibble() %>%
+ select(celex, title)
+
+print(dir_titles)
+
+```
+
+Note that text requests are by far the most time-intensive; requesting the full text for thousands of documents is liable to extend the run-time into hours. Texts are retrieved from html files by priority, but methods for .pdf and .doc files are also implemented.^[It is worth pointing out that the html and pdf contents of older case law differ. Whereas the html file is typically only going to contain a summary and the grounds of a judgment, the pdf should also contain the background to the dispute.] The function even handles multi-document resources (by pasting them together).
+
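+As a minimal sketch, a single full-text request could look like this (not run here, since text retrieval is comparatively slow):
+
+```{r, eval=FALSE}
+# retrieve the English text of the first directive in the results
+elx_fetch_data(url = paste0("http://publications.europa.eu/resource/cellar/",
+                            results$work[1]),
+               type = "text")
+```
+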
+# Application
+
+In this section I showcase a simple application of `eurlex`: making overviews of EU legislation. First, we collate data on directives.
+
+```{r dirsdata, eval=FALSE}
+dirs <- elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
+ elx_run_query()
+```
+
+Let's calculate the proportion of directives currently in force in the entire set of directives ever adopted. This variable offers a particularly good demonstration of the usefulness of the package for retrieving EU law data, because it changes every day, as new acts enter into force and old ones drop out. Regularly scraping web pages for this purpose and at this scale is simply impractical and disproportionate.
+
+```{r firstplot, message = FALSE, warning=FALSE, error=FALSE}
+library(ggplot2)
+
+dirs %>%
+ count(force) %>%
+ ggplot(aes(x = force, y = n)) +
+ geom_col()
+```
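+
+The overall proportion is then a one-line summary (`force` comes back as a "true"/"false" character flag):
+
+```{r}
+dirs %>%
+  summarise(prop_in_force = mean(force == "true", na.rm = TRUE))
+```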
+
+Directives naturally become outdated with time. It is all the more interesting to see which older acts are still surviving.
+
+```{r dirforce}
+dirs %>%
+ filter(!is.na(force)) %>%
+ mutate(date = as.Date(date)) %>%
+ ggplot(aes(x = date, y = celex)) +
+ geom_point(aes(color = force), alpha = 0.1) +
+ theme(axis.text.y = element_blank(),
+ axis.line.y = element_blank(),
+ axis.ticks.y = element_blank())
+```
+
+We want to know a bit more about some directives from the early 1970s that are still in force today. Their titles could give us a clue.
+
+```{r dirtitles}
+dirs_1970_title <- dirs %>%
+ filter(between(as.Date(date), as.Date("1970-01-01"), as.Date("1973-01-01")),
+ force == "true") %>%
+  mutate(work = paste("http://publications.europa.eu/resource/cellar/", work, sep = "")) %>%
+ mutate(title = map_chr(work, possibly(elx_fetch_data, otherwise = NA_character_),
+ "title")) %>%
+ as_tibble()
+
+print(dirs_1970_title)
+```
+
+I will use the `tidytext` package to get a quick idea of what the legislation is about.
+
+```{r wordcloud, message = FALSE, warning=FALSE, error=FALSE}
+library(tidytext)
+library(wordcloud)
+
+# wordcloud
+dirs_1970_title %>%
+ select(celex,title) %>%
+ unnest_tokens(word, title) %>%
+ count(celex, word, sort = TRUE) %>%
+ filter(!grepl("\\d", word)) %>%
+ bind_tf_idf(word, celex, n) %>%
+ with(wordcloud(word, tf_idf, max.words = 40))
+```
+
+I use term frequency-inverse document frequency (tf-idf) to weight the importance of the words in the wordcloud. If we used pure frequencies, the wordcloud would largely consist of words conveying little meaning ("the", "and", ...).
+
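+The contrast is easy to verify: ranking the same tokens by raw frequency (a quick sketch) surfaces mostly function words.
+
+```{r rawfreq}
+dirs_1970_title %>%
+  select(celex, title) %>%
+  unnest_tokens(word, title) %>%
+  count(word, sort = TRUE) %>%
+  head(10)
+```
+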
+This is an extremely basic application of the `eurlex` package. Much more sophisticated methods can be used to analyse both the content and metadata of European Union legislation. If the package is useful for your research, please cite the [accompanying paper](https://www.tandfonline.com/doi/full/10.1080/2474736X.2020.1870150).^[Michal Ovádek (2021) Facilitating access to data on European Union laws, Political Research Exchange, 3:1, DOI: [10.1080/2474736X.2020.1870150](https://www.tandfonline.com/doi/full/10.1080/2474736X.2020.1870150)]
diff --git a/vignettes/sparql-queries.Rmd b/vignettes/sparql-queries.Rmd
new file mode 100644
index 0000000..4fd6103
--- /dev/null
+++ b/vignettes/sparql-queries.Rmd
@@ -0,0 +1,101 @@
+---
+title: "Make SPARQL queries with eurlex"
+output: rmarkdown::html_vignette
+vignette: >
+ %\VignetteIndexEntry{Make SPARQL queries with eurlex}
+ %\VignetteEngine{knitr::rmarkdown}
+ %\VignetteEncoding{UTF-8}
+---
+
+```{r, include = FALSE}
+knitr::opts_chunk$set(
+ collapse = TRUE,
+ comment = "#>"
+)
+```
+
+```{r setup}
+library(eurlex)
+```
+
+This vignette shows how to use the `eurlex` R package to make SPARQL queries to retrieve data on European Union law.
+
+# Introduction
+
+Dozens of political scientists and legal scholars use data on European Union laws in their research. The provenance of these data is rarely discussed. More often than not, researchers resort to the quick and dirty technique of scraping entire html pages from `eur-lex.europa.eu`. However, this is neither the optimal nor the preferred (from the perspective of the server host) way of retrieving data, especially as the Publication Office of the European Union, the public body behind Eur-Lex, operates several dedicated APIs for the automated retrieval of its data.
+
+The allure of web scraping is completely understandable. Not only is it easier to download data that can be readily viewed in a user-friendly manner in a browser; using the dedicated APIs also requires technical knowledge of semantic web and Client URL technologies, which is not necessarily widespread among researchers. And why go through the pain of learning how to compile SPARQL queries when it is much easier to simply download the web page?
+
+The `eurlex` R package attempts to significantly reduce the overhead associated with using the SPARQL and REST APIs made available by the EU Publication Office. Although at present it does not offer access to the same array of information as comprehensive web scraping might, the package provides simpler, more efficient and transparent access to data on European Union law. This vignette gives a quick guide to the package and an even quicker introduction to the Eur-Lex dataverse.
+
+# The `eurlex` package
+
+The `eurlex` package currently envisions the typical use case as getting bulk information about EU law and policy into R as quickly as possible. The package contains three core functions to achieve that objective: `elx_make_query()` to create SPARQL queries based on user input; `elx_run_query()` to execute the pre-made or any other manually input query; and `elx_fetch_data()` to fire GET requests for certain metadata to the REST API.
+
+The package also contains largely self-explanatory functions for retrieving data on EU court cases (`elx_curia_list()`) and Council votes (`elx_council_votes()`) from outside Eur-Lex. More advanced users might be interested in downloading and custom-parsing XML notices with `elx_download_xml()`.
+
+## `elx_make_query()`: Generate SPARQL queries
+
+The function `elx_make_query()` takes as its first argument the type of resource to be retrieved from Cellar, the semantic database that powers Eur-Lex (and other EU publications).
+
+```{r makequery, message = FALSE, warning=FALSE, error=FALSE}
+library(eurlex)
+library(dplyr)
+
+query_dir <- elx_make_query(resource_type = "directive")
+```
+
+
+```{r precompute, include=FALSE}
+dirs <- elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
+ elx_run_query()
+
+results <- dirs %>% select(-force,-date)
+```
+
+Currently, it is possible to choose from among a host of resource types, including directives, regulations and even case law (see function description for the full list). It is also possible to manually specify a resource type from the [eligible list](http://publications.europa.eu/resource/authority/resource-type).^[Note, however, that not all resource types will work properly with the pre-specified query.]
+
+The choice of resource type is then reflected in the SPARQL query generated by the function:
+
+```{r}
+query_dir %>%
+ cat()
+
+elx_make_query(resource_type = "caselaw") %>%
+ cat()
+
+elx_make_query(resource_type = "manual", manual_type = "SWD") %>%
+ cat()
+
+```
+
+There are various ways of querying the same information in the Cellar database due to the existence of several overlapping classes and identifiers describing the same resources. The queries generated by the function should offer a reliable way of obtaining exhaustive results, as they have been validated by the helpdesk of the Publication Office. At the same time, it is always possible that there will be issues on either the query or the database side; please report any you encounter through GitHub.
+
+The other arguments in `elx_make_query()` relate to additional metadata to be returned. The results include the [CELEX number](https://eur-lex.europa.eu/content/tools/TableOfSectors/types_of_documents_in_eurlex.html) by default and exclude corrigenda (corrections of errors in legislation). Other data need to be opted into. Make sure to select options that are logically compatible (e.g. case law does not have a legal basis). More options should be added in the future.
+
+Note that the availability of data for each variable might have an impact on the results. The data frame returned by the query may shrink to the size of the variable with the most missing data. It is therefore recommended to always compare the results of a desired query to those of a minimal query requesting only CELEX ids.
+
+```{r}
+elx_make_query(resource_type = "directive", include_date = TRUE, include_force = TRUE) %>%
+ cat()
+
+# minimal query: elx_make_query(resource_type = "directive")
+
+elx_make_query(resource_type = "recommendation", include_date = TRUE, include_lbs = TRUE) %>%
+ cat()
+
+# minimal query: elx_make_query(resource_type = "recommendation")
+
+```
+
+You can also decide not to specify any resource type, in which case all types of documents will be returned. As there are over a million documents with a CELEX identifier, this is likely not efficient for the majority of users. But since version 0.3.5 it is possible to request documents belonging to a particular ["sector"](https://eur-lex.europa.eu/content/tools/TableOfSectors/types_of_documents_in_eurlex.html) or [directory code](https://eur-lex.europa.eu/browse/directories/legislation.html).
+
+```{r}
+# request documents from directory 18 ("Common Foreign and Security Policy")
+# and sector 3 ("Legal acts")
+
+elx_make_query(resource_type = "any",
+ directory = "18",
+ sector = 3) %>%
+ cat()
+```