Mastodon
========

[![Build Status](https://img.shields.io/travis/Gargron/mastodon.svg)][travis]
[![Code Climate](https://img.shields.io/codeclimate/github/Gargron/mastodon.svg)][code_climate]

[travis]: https://travis-ci.org/Gargron/mastodon
[code_climate]: https://codeclimate.com/github/Gargron/mastodon
Mastodon is a federated microblogging engine: an alternative implementation of the GNU Social project, based on ActivityStreams, Webfinger, PubsubHubbub and Salmon.

The project focuses on a clean REST API and a good user interface. Ruby on Rails is used for the back-end, while React.js and Redux are used for the dynamic front-end. A static front-end for public resources (profiles and statuses) is also provided.
If you would like, you can [support the development of this project on Patreon][patreon].

[patreon]: https://www.patreon.com/user?u=619786
**The project is currently in early development.**
## Resources

- [API overview](https://github.com/Gargron/mastodon/wiki/API)
- [How to use the API via cURL/oAuth](https://github.com/Gargron/mastodon/wiki/Testing-with-cURL)

## Status

- GNU Social users can follow Mastodon users
- Mastodon users can follow GNU Social users
- Retweets, favourites, mentions and replies work in both directions
- Public pages for profiles and single statuses
- Sign up, login, forgotten passwords and changing password
- Mentions and URLs converted to links in statuses
- REST API, including home and mention timelines
- OAuth2 provider system for the API
- Upload header image for profile page
- Deleting statuses, deletion propagation
- Real-time timelines via Websockets

## Configuration
- `LOCAL_DOMAIN` should be the domain/hostname of your instance. This is **absolutely required**, as it is used for generating unique IDs for everything federation-related.
- `LOCAL_HTTPS` should be set to `true` if HTTPS works on your website. This is used to generate canonical URLs, which is also important when generating and parsing federation-related IDs.
- `HUB_URL` should be the URL of the PubsubHubbub service that your instance is going to use. By default it is the open service of Superfeedr.

Consult the example configuration file, `.env.production.sample`, for the full list.
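As an illustration only (the values below are made up, not defaults; always start from `.env.production.sample`), a minimal `.env.production` covering the three variables above could look like this:

```
# Domain used to generate federation-related IDs -- absolutely required
LOCAL_DOMAIN=social.example.com

# Set to true once HTTPS works on your site, so canonical URLs are correct
LOCAL_HTTPS=true

# PubsubHubbub hub to use; Superfeedr's open service is the default
HUB_URL=https://pubsubhubbub.superfeedr.com
```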
## Requirements

- PostgreSQL
- Redis

## Running with Docker and Docker-Compose
The project now includes a `Dockerfile` and a `docker-compose.yml`. You need to turn `.env.production.sample` into `.env.production`, with all the variables set, before you can run:
```
docker-compose build
```

And finally,

```
docker-compose up -d
```
As usual, the first thing you need to do is run the migrations:
```
docker-compose run web rake db:migrate
```
And since the instance in the container runs in production mode, you also need to pre-compile the assets:
```
docker-compose run web rake assets:precompile
```
The container has two volumes, for the assets and for user uploads. The default `docker-compose.yml` maps them to the repository's `public/assets` and `public/system` directories; you may wish to put them somewhere else. Likewise, the PostgreSQL and Redis images have data containers that you may wish to map somewhere you know how to find and back up.
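As a sketch of what remapping those volumes might look like (the host paths are examples, and the `/mastodon/...` container paths assume the app's working directory inside the image; verify both against your actual `docker-compose.yml` and `Dockerfile`), the relevant part of the `web` service could be adjusted like so:

```
web:
  volumes:
    # host path : container path
    - /srv/mastodon/assets:/mastodon/public/assets
    - /srv/mastodon/uploads:/mastodon/public/system
```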
### Updating
This approach makes updating to the latest version a real breeze. Run

```
git pull
```

to pull down the updates, then re-run

```
docker-compose build
```

And finally,

```
docker-compose up -d
```

which will re-create the updated containers, leaving databases and data as is. Depending on which files have been updated, you might need to re-run migrations and asset compilation.
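Taken together, using only the commands already shown above, a full update cycle could look like this (the migration and asset steps are only needed when the update actually changes migrations or assets):

```
git pull                                        # pull down the updates
docker-compose build                            # rebuild the images
docker-compose run web rake db:migrate          # only if migrations changed
docker-compose run web rake assets:precompile   # only if assets changed
docker-compose up -d                            # re-create the containers
```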