A simple, interactive way to surface a community’s culture. We want to make it easy for people to share ideas and events with each other. Play with it here
Probably the best way to explain is to go to the site and text the number in the top left of the screen. You'll be sent some information about using the service; as of May 1, 2017, it said the following:
WELCOME TO FUTUREBOARD
Thanks for the text! To use FUTUREBOARD, write me words or feed me a link (I'll read anything from Youtube, Vimeo or URLs that end in .jpg, .png, and of course .gif).
Got some feedback? Email [email protected] :)
As mentioned, we currently handle some video links and common image formats.
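The link-handling rule described above could be sketched as a small classifier. This is a hypothetical helper, not the project's actual code; the function name and return categories are assumptions:

```python
def classify_link(url):
    """Rough sketch of the welcome message's link rules: video hosts
    we recognize, image extensions we recognize, and plain text otherwise."""
    u = url.lower()
    if "youtube.com" in u or "youtu.be" in u or "vimeo.com" in u:
        return "video"
    if u.endswith((".jpg", ".png", ".gif")):
        return "image"
    return "text"
```

A helper like this would let the SMS handler decide how to render each incoming message on the board.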
This project is licensed under the MIT License, a "short and simple permissive license with conditions only requiring preservation of copyright and license notices."
The board does not support native content from mobile (such as sending images or videos over MMS). It would also be nice to include links to articles; however, iframes can be tricky when it comes to cross-origin content. One way around this could be scraping articles for their title and text to display here.
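The article-scraping workaround could start with something as simple as pulling the page title out of fetched HTML. This is a minimal, hypothetical sketch; a real version would use a proper HTML parser and also extract body text:

```python
import re

def extract_title(html):
    """Pull the <title> text out of an HTML string, or None if absent.
    (Regex-based sketch only; a real scraper should use an HTML parser.)"""
    m = re.search(r"<title[^>]*>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else None
```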
Two themes of interest from the final review were location as a dimension and threads. If we use different numbers for each display, we can determine where somebody is interacting with a board which could be an interesting dimension to explore in the future. We also want to explore an interface for adding information to items on the board to create threads that others could access later. This could involve uploading links, images and videos all related to a certain event or theme. There was also interest in seeing a generally more organized representation of the information in FUTUREBOARD.
Thank you to Emily, Jeff and Oliver from the library along with the HtL students for all of their input over the course of this project. This project was started by Sean and Aidan but we hope to see it built upon by others in the near future :)
To install your own version of FUTUREBOARD: fork this repo, clone it so you have local access, enter the directory (`cd futureboard`), and then run `pip install -r requirements.txt` (a virtual environment is recommended). This project was deployed with Heroku using a Mongo database hosted by mLab. The texting interface was implemented using Twilio, so if you'd like to run your own version of the site, you'll need to make an account with each of these services (except mLab, if you create a Mongo instance through Heroku). As a heads up, Twilio costs $1 per phone number and $0.0075 per SMS sent or received; for MMS it's $0.01 to receive and $0.02 to send.
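The install steps above, as shell commands (assuming your fork lives at `<YOUR USERNAME>/futureboard`):

```shell
# Clone your fork and enter the project directory
git clone https://github.com/<YOUR USERNAME>/futureboard.git
cd futureboard

# Optional but recommended: create and activate a virtual environment
python -m venv venv && source venv/bin/activate

# Install Python dependencies
pip install -r requirements.txt
```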
Once you have accounts with these services, you'll need a phone number that can handle SMS (and one day MMS!) from Twilio and a free mLab instance (sandbox). Our app is set to access these services using keys stored as environment variables. You'll need to run the following commands with each placeholder replaced by your real key:

```shell
export MONGODB_URI="<Get this from mLab instance>"
export TWILIO_ACCOUNT_SID="<Get this from Twilio dashboard>"
export TWILIO_AUTH_TOKEN="<Get this from Twilio dashboard>"
```
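Inside the app, these variables would typically be read via `os.environ`. Here's a sketch of a startup check; `load_config` is a hypothetical helper for illustration, not part of the repo:

```python
import os

def load_config():
    """Read the required service keys from the environment, failing
    loudly if any are missing. (Hypothetical helper, not repo code.)"""
    required = ["MONGODB_URI", "TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN"]
    missing = [k for k in required if k not in os.environ]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {k: os.environ[k] for k in required}
```

Failing at startup with a clear message beats a confusing connection error deep inside a request handler.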
Once you've set these environment variables, you should be able to run the app locally by executing `python -m app.server` from the root of the repo directory. This will serve the app at http://localhost:5000.
NOTE: You'll need to add these as environment variables to your Heroku instance as well. Depending on your setup, you may want separate databases for local vs. production, but for just getting the app running it's not a crime to use the same one.
Now that you've got these variables set, you need to set up the Heroku toolbelt. Once logged in through the command-line interface, run `heroku git:remote -a <YOUR PROJECT NAME>`. Now when you've committed changes, you can push to Heroku by running `git push heroku master`. If everything was done correctly, your app should deploy, and you can access it at `<YOUR PROJECT NAME>.herokuapp.com`.
A tricky part about working with Twilio is that you need a publicly accessible URL for them to send texts to, so you need to have some form of staging server or tunneling software like Ngrok (tutorial for using Ngrok with Twilio). The latter is recommended as it makes development more natural (test changes without new deployment) but is less permanent than a staging server managed by Heroku. A combination is ideal but extra work, that's all up to you!
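With Ngrok, local development might look like the following. The `/sms` webhook path is an assumption; use whatever route your Twilio number is configured to hit:

```shell
# In one terminal: run the app locally on port 5000
python -m app.server

# In another terminal: tunnel port 5000 to a public URL
ngrok http 5000

# Then point your Twilio number's SMS webhook at the forwarding URL,
# e.g. https://<random-subdomain>.ngrok.io/sms
```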
As far as database administration goes, a benefit of using mLab is having an interface for sifting through db records and doing general management rather than working through the command line.
Setup is very similar to the standalone setup shown above, except you can clone the repository directly. The added complication is that you'll need to be added to the various services by somebody who has worked on this project before. If you have any questions, feel free to reach out to [email protected]. You would then use the variables of the main project to connect to the staging database and server. We have a pipeline set up in Heroku that allows us to work on staging until we feel good about it and then easily push those changes to the main server.
Directory Structure provides a roadmap to the modules and their functions. Once you are familiar with the layout, check out the to-do list, and then do some coding!
Submit a pull request detailing what you did and why. Once the code has been reviewed, the problem will be removed from the to-do list, and your feature will be merged into master and pushed to production.
- Implement change log
- Spread the gospel of FUTUREBOARD
- Remove obsolete files and directories
- Investigate why certain links don't render correctly
- At one point, a bunch of records seemed to disappear. Investigate if this is a DB issue or a matter of date filtering gone wrong.
- Address issues opened by osteele:
- Validate SMS senders; messages can currently come from anybody, not just the Olin community
- Document or automate local MongoDB for development
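For the "Validate SMS senders" item, one simple approach is a whitelist check on the incoming `From` number. This is a hypothetical sketch; the number normalization here is deliberately crude:

```python
def is_allowed_sender(from_number, allowed_numbers):
    """Return True if from_number matches an entry in allowed_numbers.

    Hypothetical whitelist check: strip everything but digits so that
    '+1 (781) 555-0100' and '17815550100' compare equal.
    """
    digits = "".join(ch for ch in from_number if ch.isdigit())
    allowed = {"".join(ch for ch in num if ch.isdigit()) for num in allowed_numbers}
    return digits in allowed
```

A real deployment would also want to decide what to do with rejected senders (silently drop, or reply with an explanation).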
`scrape.js` uses casperjs to scrape email data from the CarpeDiem archives and throw it into text files in the `/data` directory, which is ignored by git.
`wrangle.py` pulls the data from the `/data` directory, parses out critical information, and throws it into a JSON which is then stored in `/parsed_data`, which is also ignored by git.
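A rough sketch of what a wrangling step like this might look like. The field names and file layout here are assumptions for illustration, not the actual `wrangle.py` logic:

```python
import json
from pathlib import Path

def wrangle(data_dir="data", out_dir="parsed_data"):
    """Parse each scraped text file into a minimal record and store the
    result as JSON. (Hypothetical sketch; the real wrangle.py extracts
    different fields from the email archives.)"""
    records = []
    for path in sorted(Path(data_dir).glob("*.txt")):
        lines = path.read_text().splitlines()
        records.append({
            "source": path.name,
            "subject": lines[0] if lines else "",
            "body": "\n".join(lines[1:]),
        })
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    (out / "emails.json").write_text(json.dumps(records, indent=2))
    return records
```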
`fetch_emails.py` attempts to pull new emails from the CarpeBot Gmail account. It currently does not work very well; it will use whichever Gmail account is currently logged in on the computer. Alternatives should be explored in the future.
All server files are stored in the `/app` directory.
`server.py` runs a Flask server. If run directly from the command line, it will default to `localhost:5000`; if run within a Heroku app, it will use the settings of that app. It requires the `MONGODB_URI` environment variable to be set as described in the operators section.
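The local-vs-Heroku behavior typically follows the standard Heroku convention of reading a `PORT` environment variable. This is a sketch of that convention, not the repo's exact code:

```python
import os

def get_port(default=5000):
    # Heroku injects a PORT environment variable into each dyno; locally
    # that variable is absent, so we fall back to 5000. (Sketch of the
    # standard Heroku convention, not the actual server code.)
    return int(os.environ.get("PORT", default))
```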
`models.py` is responsible for taking the email JSONs in the `/parsed_data` directory and pushing them to a remote database. This module also requires the `MONGODB_URI` environment variable to be set.
`factory.py` sets basic configurations for the Flask app.
HTML templates are stored in `/app/templates`, and `/app/static` contains the CSS, fonts, and JavaScript required to render the board.
`layout.html` contains styling for the whole site.
`board.html` extends `layout.html` and runs the custom scripts required to use the main board.
Other templates are no longer used and should be removed in future boards.
To be implemented
This project includes utilities for acquiring Olin mailing list data. If you'd like to access CarpeDiem emails, you'll need an Olin email and a subscription to the list. Once you've signed up, follow the instructions in the `scrape.js` file:
```
To use, install casperjs (npm install -g casperjs) and run:

    casperjs scrape.js <arg1> <arg2>

where arg1 is your email for carpe and arg2 is your password for carpe.
If you don't think you have a password for carpe, you do, but it's randomly
generated. You'll need to visit the carpediem list at lists.olin.edu to
get it reset.
```
After that, run `python wrangle.py`, which will do a rough parse of all the scraped data and make it accessible to the web app. Finally, run `models.py` to add the parsed emails to the remote database your app will use. Note that `models.py` resets the database; see the operators section for information on database setup.
If you intend to use this feature, please know that it is your responsibility to never commit any of this data to a public repository like GitHub or disclose the identity of Olin students if creating a public app. Please be respectful of our community and its data!