[MINOR][DOCS] Tighten up some key links to the project and download pages to use HTTPS
## What changes were proposed in this pull request?
Tighten up some key links to the project and download pages to use HTTPS
## How was this patch tested?
N/A
Closes #24665 from srowen/HTTPSURLs.
Authored-by: Sean Owen <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
### R/README.md (+4, -4)

````diff
@@ -17,7 +17,7 @@ export R_HOME=/home/username/R
 #### Build Spark
-Build Spark with [Maven](http://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
+Build Spark with [Maven](https://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
 ```bash
 build/mvn -DskipTests -Psparkr package
@@ -35,15 +35,15 @@ SparkContext, you can run
 ./bin/sparkR --master "local[2]"
-To set other options like driver memory, executor memory etc. you can pass in the [spark-submit](http://spark.apache.org/docs/latest/submitting-applications.html) arguments to `./bin/sparkR`
+To set other options like driver memory, executor memory etc. you can pass in the [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html) arguments to `./bin/sparkR`
 #### Using SparkR from RStudio
 If you wish to use SparkR from RStudio, please refer [SparkR documentation](https://spark.apache.org/docs/latest/sparkr.html#starting-up-from-rstudio).
 #### Making changes to SparkR
-The [instructions](http://spark.apache.org/contributing.html) for making contributions to Spark also apply to SparkR.
+The [instructions](https://spark.apache.org/contributing.html) for making contributions to Spark also apply to SparkR.
 If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using `R/install-dev.sh` and test your changes.
 Once you have made your changes, please include unit tests for them and run existing unit tests using the `R/run-tests.sh` script as described below.
````
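As a quick illustration of the workflow those README lines describe, here is a minimal sketch. The `--driver-memory` value is an arbitrary example; the two helper scripts are the ones the README itself names.

```bash
# Pass any spark-submit option through to SparkR (the memory value is only an example)
./bin/sparkR --master "local[2]" --driver-memory 2g

# After R-only changes: re-install the SparkR package, then run the existing unit tests
R/install-dev.sh
R/run-tests.sh
```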
````diff
@@ -58,7 +58,7 @@ To run one of them, use `./bin/spark-submit <filename> <args>`. For example:
-You can run R unit tests by following the instructions under [Running R Tests](http://spark.apache.org/docs/latest/building-spark.html#running-r-tests).
+You can run R unit tests by following the instructions under [Running R Tests](https://spark.apache.org/docs/latest/building-spark.html#running-r-tests).
````
### R/WINDOWS.md (+6, -6)

````diff
@@ -20,19 +20,19 @@ license: |
 To build SparkR on Windows, the following steps are required
-1. Install R (>= 3.1) and [Rtools](http://cran.r-project.org/bin/windows/Rtools/). Make sure to
+1. Install R (>= 3.1) and [Rtools](https://cloud.r-project.org/bin/windows/Rtools/). Make sure to
 include Rtools and R in `PATH`. Note that support for R prior to version 3.4 is deprecated as of Spark 3.0.0.
 2. Install
-[JDK8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) and set
+[JDK8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) and set
 `JAVA_HOME` in the system environment variables.
-3. Download and install [Maven](http://maven.apache.org/download.html). Also include the `bin`
+3. Download and install [Maven](https://maven.apache.org/download.html). Also include the `bin`
 directory in Maven in `PATH`.
-4. Set `MAVEN_OPTS` as described in [Building Spark](http://spark.apache.org/docs/latest/building-spark.html).
+4. Set `MAVEN_OPTS` as described in [Building Spark](https://spark.apache.org/docs/latest/building-spark.html).
-5. Open a command shell (`cmd`) in the Spark directory and build Spark with [Maven](http://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
+5. Open a command shell (`cmd`) in the Spark directory and build Spark with [Maven](https://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
 ```bash
 mvn.cmd -DskipTests -Psparkr package
@@ -52,7 +52,7 @@ To run the SparkR unit tests on Windows, the following steps are required —ass
 4. Set the environment variable `HADOOP_HOME` to the full path to the newly created `hadoop` directory.
-5. Run unit tests for SparkR by running the command below. You need to install the needed packages following the instructions under [Running R Tests](http://spark.apache.org/docs/latest/building-spark.html#running-r-tests) first:
+5. Run unit tests for SparkR by running the command below. You need to install the needed packages following the instructions under [Running R Tests](https://spark.apache.org/docs/latest/building-spark.html#running-r-tests) first:
````
### README.md

````diff
@@ -17,12 +17,12 @@ and Structured Streaming for stream processing.
 ## Online Documentation
 You can find the latest Spark documentation, including a programming
-guide, on the [project web page](http://spark.apache.org/documentation.html).
+guide, on the [project web page](https://spark.apache.org/documentation.html).
 This README file only contains basic setup instructions.
 ## Building Spark
-Spark is built using [Apache Maven](http://maven.apache.org/).
+Spark is built using [Apache Maven](https://maven.apache.org/).
 To build Spark and its example programs, run:
 build/mvn -DskipTests clean package
@@ -31,9 +31,9 @@ To build Spark and its example programs, run:
 You can build Spark using more than one thread by using the -T option with Maven, see ["Parallel builds in Maven 3"](https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3).
 More detailed documentation is available from the project site, at
-For general development tips, including info on developing Spark using an IDE, see ["Useful Developer Tools"](http://spark.apache.org/developer-tools.html).
+For general development tips, including info on developing Spark using an IDE, see ["Useful Developer Tools"](https://spark.apache.org/developer-tools.html).
 ## Interactive Scala Shell
````
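For readers unfamiliar with Maven's parallel builds, here is a minimal sketch of the `-T` option referenced above; the thread values are arbitrary examples, not recommendations from the patch.

```bash
# One Maven build thread per CPU core
./build/mvn -T 1C -DskipTests clean package

# Or a fixed number of threads
./build/mvn -T 4 -DskipTests clean package
```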
````diff
@@ -83,7 +83,7 @@ can be run using:
 ./dev/run-tests
 Please see the guidance on how to
-[run tests for a module, or individual tests](http://spark.apache.org/developer-tools.html#individual-tests).
+[run tests for a module, or individual tests](https://spark.apache.org/developer-tools.html#individual-tests).
 There is also a Kubernetes integration test, see resource-managers/kubernetes/integration-tests/README.md
````
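To make the module-level testing concrete, here is a rough sketch using plain Maven module selection. This is only an assumption about one workable approach; the linked "individual tests" page remains the authoritative recipe.

```bash
# Install all modules once so a single module can be tested in isolation
./build/mvn install -DskipTests

# Then run only the tests of one module (here: core)
./build/mvn -pl core test
```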
````diff
@@ -94,16 +94,16 @@ storage systems. Because the protocols have changed in different versions of
 Hadoop, you must build Spark against the same version that your cluster runs.
 Please refer to the build documentation at
-["Specifying the Hadoop Version and Enabling YARN"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn)
+["Specifying the Hadoop Version and Enabling YARN"](https://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn)
 for detailed guidance on building for a particular distribution of Hadoop, including
 building for particular Hive and Hive Thriftserver distributions.
 ## Configuration
-Please refer to the [Configuration Guide](http://spark.apache.org/docs/latest/configuration.html)
+Please refer to the [Configuration Guide](https://spark.apache.org/docs/latest/configuration.html)
 in the online documentation for an overview on how to configure Spark.
 ## Contributing
-Please review the [Contribution to Spark guide](http://spark.apache.org/contributing.html)
+Please review the [Contribution to Spark guide](https://spark.apache.org/contributing.html)
 for information on how to get started contributing to the project.
````
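As a concrete illustration of building against a specific Hadoop version with YARN enabled, a hedged sketch follows; the `-Pyarn` profile and the `hadoop.version` property come from the linked build documentation, while the version number is only a placeholder.

```bash
# Build against the Hadoop version your cluster runs (replace 2.7.4 with that version)
./build/mvn -Pyarn -Dhadoop.version=2.7.4 -DskipTests clean package
```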
### docs/building-spark.md (+5, -5)

````diff
@@ -51,7 +51,7 @@ You can fix these problems by setting the `MAVEN_OPTS` variable as discussed bef
 ### build/mvn
-Spark now comes packaged with a self-contained Maven installation to ease building and deployment of Spark from source located under the `build/` directory. This script will automatically download and setup all necessary build requirements ([Maven](https://maven.apache.org/), [Scala](http://www.scala-lang.org/), and [Zinc](https://github.com/typesafehub/zinc)) locally within the `build/` directory itself. It honors any `mvn` binary if present already, however, will pull down its own copy of Scala and Zinc regardless to ensure proper version requirements are met. `build/mvn` execution acts as a pass through to the `mvn` call allowing easy transition from previous build methods. As an example, one can build a version of Spark as follows:
+Spark now comes packaged with a self-contained Maven installation to ease building and deployment of Spark from source located under the `build/` directory. This script will automatically download and setup all necessary build requirements ([Maven](https://maven.apache.org/), [Scala](https://www.scala-lang.org/), and [Zinc](https://github.com/typesafehub/zinc)) locally within the `build/` directory itself. It honors any `mvn` binary if present already, however, will pull down its own copy of Scala and Zinc regardless to ensure proper version requirements are met. `build/mvn` execution acts as a pass through to the `mvn` call allowing easy transition from previous build methods. As an example, one can build a version of Spark as follows:
 ./build/mvn -DskipTests clean package
@@ -125,7 +125,7 @@ should run continuous compilation (i.e. wait for changes). However, this has not
 extensively. A couple of gotchas to note:
 * it only scans the paths `src/main` and `src/test` (see
-[docs](http://davidb.github.io/scala-maven-plugin/example_cc.html)), so it will only work
+[docs](https://davidb.github.io/scala-maven-plugin/example_cc.html)), so it will only work
 from within certain submodules that have that structure.
 * you'll typically need to run `mvn install` from the project root for compilation within
````
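Putting those continuous-compilation gotchas into one hedged sketch (this assumes the plugin's `scala:cc` goal, which the linked scala-maven-plugin page documents):

```bash
# Install all modules once so submodule builds can resolve their siblings
./build/mvn install -DskipTests

# Then start continuous compilation from inside a submodule that has src/main and src/test
cd core
../build/mvn scala:cc
```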
````diff
@@ -159,7 +159,7 @@ Configure the JVM options for SBT in `.jvmopts` at the project root, for example
 -Xmx2g
 -XX:ReservedCodeCacheSize=512m
-For the meanings of these two options, please carefully read the [Setting up Maven's Memory Usage section](http://spark.apache.org/docs/latest/building-spark.html#setting-up-mavens-memory-usage).
+For the meanings of these two options, please carefully read the [Setting up Maven's Memory Usage section](https://spark.apache.org/docs/latest/building-spark.html#setting-up-mavens-memory-usage).
 ## Speeding up Compilation
````
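For comparison with the SBT `.jvmopts` lines above, the Maven side is usually configured through `MAVEN_OPTS`; a minimal sketch reusing the same two values:

```bash
# Give Maven the same heap and code-cache settings shown for SBT above
export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m"
./build/mvn -DskipTests clean package
```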
````diff
@@ -238,8 +238,8 @@ The run-tests script also can be limited to a specific Python version or a speci
 To run the SparkR tests you will need to install the [knitr](https://cran.r-project.org/package=knitr), [rmarkdown](https://cran.r-project.org/package=rmarkdown), [testthat](https://cran.r-project.org/package=testthat), [e1071](https://cran.r-project.org/package=e1071) and [survival](https://cran.r-project.org/package=survival) packages first:
-R -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'e1071', 'survival'), repos='http://cran.us.r-project.org')"
-R -e "devtools::install_version('testthat', version = '1.0.2', repos='http://cran.us.r-project.org')"
+R -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'e1071', 'survival'), repos='https://cloud.r-project.org/')"
+R -e "devtools::install_version('testthat', version = '1.0.2', repos='https://cloud.r-project.org/')"
 You can run just the SparkR tests using the command:
````
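Based on the R/README.md portion of this PR, the SparkR test driver is most likely the `R/run-tests.sh` script; a minimal sketch under that assumption, presuming the packages above are installed and Spark was built with `-Psparkr`:

```bash
# Run only the SparkR test suite (script referenced in R/README.md above)
R/run-tests.sh
```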
### python/README.md

````diff
-The Python packaging for Spark is not intended to replace all of the other use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to set up your own standalone Spark cluster. You can download the full version of Spark from the [Apache Spark downloads page](http://spark.apache.org/downloads.html).
+The Python packaging for Spark is not intended to replace all of the other use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to set up your own standalone Spark cluster. You can download the full version of Spark from the [Apache Spark downloads page](https://spark.apache.org/downloads.html).
 **NOTE:** If you are using this with a Spark standalone cluster you must ensure that the version (including minor version) matches or you may experience odd errors.
````
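To illustrate the version-matching note above, a hypothetical example; the version number is a placeholder, and you would substitute whatever your cluster actually runs.

```bash
# Pin the pip-installed PySpark to the exact version of the cluster (placeholder value)
pip install pyspark==2.4.3

# Double-check what ended up installed
python -c "import pyspark; print(pyspark.__version__)"
```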