
Commit eed6de1

srowen authored and dongjoon-hyun committed
[MINOR][DOCS] Tighten up some key links to the project and download pages to use HTTPS
## What changes were proposed in this pull request?

Tighten up some key links to the project and download pages to use HTTPS.

## How was this patch tested?

N/A

Closes #24665 from srowen/HTTPSURLs.

Authored-by: Sean Owen <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
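For reference, a minimal sketch of how remaining plain-HTTP links of this kind can be located and rewritten (assuming a POSIX shell with GNU grep and sed; the pattern and paths are illustrative, not part of this commit):

```bash
# List plain-HTTP links to the Spark project site in Markdown files
grep -rn --include='*.md' 'http://spark\.apache\.org' .

# Rewrite them in place to HTTPS; review the resulting git diff before committing
grep -rl --include='*.md' 'http://spark\.apache\.org' . \
  | xargs sed -i 's|http://spark\.apache\.org|https://spark.apache.org|g'
```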
1 parent 4d64ed8 commit eed6de1

9 files changed: +34 -34 lines changed

.github/PULL_REQUEST_TEMPLATE (+1 -1)

@@ -7,4 +7,4 @@
 (Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
 (If this patch involves UI changes, please attach a screenshot; otherwise, remove this)
 
-Please review http://spark.apache.org/contributing.html before opening a pull request.
+Please review https://spark.apache.org/contributing.html before opening a pull request.

CONTRIBUTING.md (+2 -2)

@@ -1,12 +1,12 @@
 ## Contributing to Spark
 
 *Before opening a pull request*, review the
-[Contributing to Spark guide](http://spark.apache.org/contributing.html).
+[Contributing to Spark guide](https://spark.apache.org/contributing.html).
 It lists steps that are required before creating a PR. In particular, consider:
 
 - Is the change important and ready enough to ask the community to spend time reviewing?
 - Have you searched for existing, related JIRAs and pull requests?
-- Is this a new feature that can stand alone as a [third party project](http://spark.apache.org/third-party-projects.html) ?
+- Is this a new feature that can stand alone as a [third party project](https://spark.apache.org/third-party-projects.html) ?
 - Is the change being proposed clearly explained and motivated?
 
 When you contribute code, you affirm that the contribution is your original work and that you

R/README.md (+4 -4)

@@ -17,7 +17,7 @@ export R_HOME=/home/username/R
 
 #### Build Spark
 
-Build Spark with [Maven](http://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
+Build Spark with [Maven](https://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
 
 ```bash
 build/mvn -DskipTests -Psparkr package
@@ -35,15 +35,15 @@ SparkContext, you can run
 
 ./bin/sparkR --master "local[2]"
 
-To set other options like driver memory, executor memory etc. you can pass in the [spark-submit](http://spark.apache.org/docs/latest/submitting-applications.html) arguments to `./bin/sparkR`
+To set other options like driver memory, executor memory etc. you can pass in the [spark-submit](https://spark.apache.org/docs/latest/submitting-applications.html) arguments to `./bin/sparkR`
 
 #### Using SparkR from RStudio
 
 If you wish to use SparkR from RStudio, please refer [SparkR documentation](https://spark.apache.org/docs/latest/sparkr.html#starting-up-from-rstudio).
 
 #### Making changes to SparkR
 
-The [instructions](http://spark.apache.org/contributing.html) for making contributions to Spark also apply to SparkR.
+The [instructions](https://spark.apache.org/contributing.html) for making contributions to Spark also apply to SparkR.
 If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using `R/install-dev.sh` and test your changes.
 Once you have made your changes, please include unit tests for them and run existing unit tests using the `R/run-tests.sh` script as described below.
 
@@ -58,7 +58,7 @@ To run one of them, use `./bin/spark-submit <filename> <args>`. For example:
 ```bash
 ./bin/spark-submit examples/src/main/r/dataframe.R
 ```
-You can run R unit tests by following the instructions under [Running R Tests](http://spark.apache.org/docs/latest/building-spark.html#running-r-tests).
+You can run R unit tests by following the instructions under [Running R Tests](https://spark.apache.org/docs/latest/building-spark.html#running-r-tests).
 
 ### Running on YARN
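As an illustrative aside to the R/README.md hunk above, passing spark-submit style options to `./bin/sparkR` looks like the following sketch (the memory values are arbitrary examples, not part of this patch):

```bash
# Start a local SparkR shell with explicit driver and executor memory
./bin/sparkR --master "local[2]" --driver-memory 2g --executor-memory 2g
```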

R/WINDOWS.md (+6 -6)

@@ -20,19 +20,19 @@ license: |
 
 To build SparkR on Windows, the following steps are required
 
-1. Install R (>= 3.1) and [Rtools](http://cran.r-project.org/bin/windows/Rtools/). Make sure to
+1. Install R (>= 3.1) and [Rtools](https://cloud.r-project.org/bin/windows/Rtools/). Make sure to
 include Rtools and R in `PATH`. Note that support for R prior to version 3.4 is deprecated as of Spark 3.0.0.
 
 2. Install
-[JDK8](http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) and set
+[JDK8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) and set
 `JAVA_HOME` in the system environment variables.
 
-3. Download and install [Maven](http://maven.apache.org/download.html). Also include the `bin`
+3. Download and install [Maven](https://maven.apache.org/download.html). Also include the `bin`
 directory in Maven in `PATH`.
 
-4. Set `MAVEN_OPTS` as described in [Building Spark](http://spark.apache.org/docs/latest/building-spark.html).
+4. Set `MAVEN_OPTS` as described in [Building Spark](https://spark.apache.org/docs/latest/building-spark.html).
 
-5. Open a command shell (`cmd`) in the Spark directory and build Spark with [Maven](http://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
+5. Open a command shell (`cmd`) in the Spark directory and build Spark with [Maven](https://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
 
 ```bash
 mvn.cmd -DskipTests -Psparkr package
@@ -52,7 +52,7 @@ To run the SparkR unit tests on Windows, the following steps are required —ass
 
 4. Set the environment variable `HADOOP_HOME` to the full path to the newly created `hadoop` directory.
 
-5. Run unit tests for SparkR by running the command below. You need to install the needed packages following the instructions under [Running R Tests](http://spark.apache.org/docs/latest/building-spark.html#running-r-tests) first:
+5. Run unit tests for SparkR by running the command below. You need to install the needed packages following the instructions under [Running R Tests](https://spark.apache.org/docs/latest/building-spark.html#running-r-tests) first:
 
 ```
 .\bin\spark-submit2.cmd --conf spark.hadoop.fs.defaultFS="file:///" R\pkg\tests\run-all.R

README.md (+9 -9)

@@ -7,7 +7,7 @@ rich set of higher-level tools including Spark SQL for SQL and DataFrames,
 MLlib for machine learning, GraphX for graph processing,
 and Structured Streaming for stream processing.
 
-<http://spark.apache.org/>
+<https://spark.apache.org/>
 
 [![Jenkins Build](https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7/badge/icon)](https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7)
 [![AppVeyor Build](https://img.shields.io/appveyor/ci/ApacheSoftwareFoundation/spark/master.svg?style=plastic&logo=appveyor)](https://ci.appveyor.com/project/ApacheSoftwareFoundation/spark)
@@ -17,12 +17,12 @@ and Structured Streaming for stream processing.
 ## Online Documentation
 
 You can find the latest Spark documentation, including a programming
-guide, on the [project web page](http://spark.apache.org/documentation.html).
+guide, on the [project web page](https://spark.apache.org/documentation.html).
 This README file only contains basic setup instructions.
 
 ## Building Spark
 
-Spark is built using [Apache Maven](http://maven.apache.org/).
+Spark is built using [Apache Maven](https://maven.apache.org/).
 To build Spark and its example programs, run:
 
 build/mvn -DskipTests clean package
@@ -31,9 +31,9 @@ To build Spark and its example programs, run:
 
 You can build Spark using more than one thread by using the -T option with Maven, see ["Parallel builds in Maven 3"](https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3).
 More detailed documentation is available from the project site, at
-["Building Spark"](http://spark.apache.org/docs/latest/building-spark.html).
+["Building Spark"](https://spark.apache.org/docs/latest/building-spark.html).
 
-For general development tips, including info on developing Spark using an IDE, see ["Useful Developer Tools"](http://spark.apache.org/developer-tools.html).
+For general development tips, including info on developing Spark using an IDE, see ["Useful Developer Tools"](https://spark.apache.org/developer-tools.html).
 
 ## Interactive Scala Shell
 
@@ -83,7 +83,7 @@ can be run using:
 ./dev/run-tests
 
 Please see the guidance on how to
-[run tests for a module, or individual tests](http://spark.apache.org/developer-tools.html#individual-tests).
+[run tests for a module, or individual tests](https://spark.apache.org/developer-tools.html#individual-tests).
 
 There is also a Kubernetes integration test, see resource-managers/kubernetes/integration-tests/README.md
 
@@ -94,16 +94,16 @@ storage systems. Because the protocols have changed in different versions of
 Hadoop, you must build Spark against the same version that your cluster runs.
 
 Please refer to the build documentation at
-["Specifying the Hadoop Version and Enabling YARN"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn)
+["Specifying the Hadoop Version and Enabling YARN"](https://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version-and-enabling-yarn)
 for detailed guidance on building for a particular distribution of Hadoop, including
 building for particular Hive and Hive Thriftserver distributions.
 
 ## Configuration
 
-Please refer to the [Configuration Guide](http://spark.apache.org/docs/latest/configuration.html)
+Please refer to the [Configuration Guide](https://spark.apache.org/docs/latest/configuration.html)
 in the online documentation for an overview on how to configure Spark.
 
 ## Contributing
 
-Please review the [Contribution to Spark guide](http://spark.apache.org/contributing.html)
+Please review the [Contribution to Spark guide](https://spark.apache.org/contributing.html)
 for information on how to get started contributing to the project.

build/sbt-launch-lib.bash (+2 -2)

@@ -56,13 +56,13 @@ acquire_sbt_jar () {
 wget --quiet ${URL1} -O "${JAR_DL}" &&\
 mv "${JAR_DL}" "${JAR}"
 else
-printf "You do not have curl or wget installed, please install sbt manually from http://www.scala-sbt.org/\n"
+printf "You do not have curl or wget installed, please install sbt manually from https://www.scala-sbt.org/\n"
 exit -1
 fi
 fi
 if [ ! -f "${JAR}" ]; then
 # We failed to download
-printf "Our attempt to download sbt locally to ${JAR} failed. Please install sbt manually from http://www.scala-sbt.org/\n"
+printf "Our attempt to download sbt locally to ${JAR} failed. Please install sbt manually from https://www.scala-sbt.org/\n"
 exit -1
 fi
 printf "Launching sbt from ${JAR}\n"

dev/create-release/vote.tmpl (+1 -1)

@@ -6,7 +6,7 @@ a minimum of 3 +1 votes.
 [ ] +1 Release this package as Apache Spark {version}
 [ ] -1 Do not release this package because ...
 
-To learn more about Apache Spark, please see http://spark.apache.org/
+To learn more about Apache Spark, please see https://spark.apache.org/
 
 The tag to be voted on is {tag} (commit {tag_commit}):
 https://github.com/apache/spark/tree/{tag}

docs/building-spark.md (+5 -5)

@@ -51,7 +51,7 @@ You can fix these problems by setting the `MAVEN_OPTS` variable as discussed bef
 
 ### build/mvn
 
-Spark now comes packaged with a self-contained Maven installation to ease building and deployment of Spark from source located under the `build/` directory. This script will automatically download and setup all necessary build requirements ([Maven](https://maven.apache.org/), [Scala](http://www.scala-lang.org/), and [Zinc](https://github.com/typesafehub/zinc)) locally within the `build/` directory itself. It honors any `mvn` binary if present already, however, will pull down its own copy of Scala and Zinc regardless to ensure proper version requirements are met. `build/mvn` execution acts as a pass through to the `mvn` call allowing easy transition from previous build methods. As an example, one can build a version of Spark as follows:
+Spark now comes packaged with a self-contained Maven installation to ease building and deployment of Spark from source located under the `build/` directory. This script will automatically download and setup all necessary build requirements ([Maven](https://maven.apache.org/), [Scala](https://www.scala-lang.org/), and [Zinc](https://github.com/typesafehub/zinc)) locally within the `build/` directory itself. It honors any `mvn` binary if present already, however, will pull down its own copy of Scala and Zinc regardless to ensure proper version requirements are met. `build/mvn` execution acts as a pass through to the `mvn` call allowing easy transition from previous build methods. As an example, one can build a version of Spark as follows:
 
 ./build/mvn -DskipTests clean package
 
@@ -125,7 +125,7 @@ should run continuous compilation (i.e. wait for changes). However, this has not
 extensively. A couple of gotchas to note:
 
 * it only scans the paths `src/main` and `src/test` (see
-[docs](http://davidb.github.io/scala-maven-plugin/example_cc.html)), so it will only work
+[docs](https://davidb.github.io/scala-maven-plugin/example_cc.html)), so it will only work
 from within certain submodules that have that structure.
 
 * you'll typically need to run `mvn install` from the project root for compilation within
@@ -159,7 +159,7 @@ Configure the JVM options for SBT in `.jvmopts` at the project root, for example
 -Xmx2g
 -XX:ReservedCodeCacheSize=512m
 
-For the meanings of these two options, please carefully read the [Setting up Maven's Memory Usage section](http://spark.apache.org/docs/latest/building-spark.html#setting-up-mavens-memory-usage).
+For the meanings of these two options, please carefully read the [Setting up Maven's Memory Usage section](https://spark.apache.org/docs/latest/building-spark.html#setting-up-mavens-memory-usage).
 
 ## Speeding up Compilation
 
@@ -238,8 +238,8 @@ The run-tests script also can be limited to a specific Python version or a speci
 
 To run the SparkR tests you will need to install the [knitr](https://cran.r-project.org/package=knitr), [rmarkdown](https://cran.r-project.org/package=rmarkdown), [testthat](https://cran.r-project.org/package=testthat), [e1071](https://cran.r-project.org/package=e1071) and [survival](https://cran.r-project.org/package=survival) packages first:
 
-R -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'e1071', 'survival'), repos='http://cran.us.r-project.org')"
-R -e "devtools::install_version('testthat', version = '1.0.2', repos='http://cran.us.r-project.org')"
+R -e "install.packages(c('knitr', 'rmarkdown', 'devtools', 'e1071', 'survival'), repos='https://cloud.r-project.org/')"
+R -e "devtools::install_version('testthat', version = '1.0.2', repos='https://cloud.r-project.org/')"
 
 You can run just the SparkR tests using the command:

python/README.md (+4 -4)

@@ -7,22 +7,22 @@ rich set of higher-level tools including Spark SQL for SQL and DataFrames,
 MLlib for machine learning, GraphX for graph processing,
 and Structured Streaming for stream processing.
 
-<http://spark.apache.org/>
+<https://spark.apache.org/>
 
 ## Online Documentation
 
 You can find the latest Spark documentation, including a programming
-guide, on the [project web page](http://spark.apache.org/documentation.html)
+guide, on the [project web page](https://spark.apache.org/documentation.html)
 
 
 ## Python Packaging
 
 This README file only contains basic information related to pip installed PySpark.
 This packaging is currently experimental and may change in future versions (although we will do our best to keep compatibility).
 Using PySpark requires the Spark JARs, and if you are building this from source please see the builder instructions at
-["Building Spark"](http://spark.apache.org/docs/latest/building-spark.html).
+["Building Spark"](https://spark.apache.org/docs/latest/building-spark.html).
 
-The Python packaging for Spark is not intended to replace all of the other use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to set up your own standalone Spark cluster. You can download the full version of Spark from the [Apache Spark downloads page](http://spark.apache.org/downloads.html).
+The Python packaging for Spark is not intended to replace all of the other use cases. This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to set up your own standalone Spark cluster. You can download the full version of Spark from the [Apache Spark downloads page](https://spark.apache.org/downloads.html).
 
 
 **NOTE:** If you are using this with a Spark standalone cluster you must ensure that the version (including minor version) matches or you may experience odd errors.
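As a brief, hedged aside to the python/README.md hunk above: the pip-packaged PySpark it describes is installed as follows (assuming a recent pip, and keeping in mind the version-match NOTE when targeting a standalone cluster):

```bash
# Install the pip-packaged PySpark for use against an existing cluster
pip install pyspark
```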
