Pod data backup

Collecting pod data

You should back up all user data, that is, the whole database and the uploaded images. For the images, just make copies of the public/uploads directory. For the database, dump the diaspora_production database from your database server.
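For a one-off manual backup, the two steps boil down to something like the following sketch, run from the root of the installation. This assumes a local PostgreSQL server you can connect to as the current user and a hypothetical /srv/backups target directory:

# minimal manual backup sketch; assumes a local PostgreSQL server
# reachable as the current user and a hypothetical /srv/backups directory
pg_dump diaspora_production | gzip > /srv/backups/diaspora_production.sql.gz
cp -a public/uploads /srv/backups/uploads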

You may find the following Bash script useful for collecting your pod data and packing it into an archive. It dumps your database using the diaspora* database configuration, picks up public/uploads and config/diaspora.toml, and packs it all into a single tar file. The presented version only supports PostgreSQL for now, but MySQL support could easily be added. The script expects to be placed in a subfolder of the diaspora* installation (e.g. script). If placed somewhere else, it fails with an "Unexpected directory" error. On a successful run, the script creates an archive of the pod data with a file name following the diaspora_hostname_date.tar pattern.

#!/bin/bash

# ensure we are in the root of the diaspora* installation
cd "$(dirname "$0")/.."

if [ "$(cat .ruby-gemset 2>/dev/null)" != "diaspora" ]; then
  echo "Unexpected directory"
  exit 1
fi

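# read the production database settings from config/database.yml and join
# them with ';' for splitting below (values containing ';' would break this)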
DBCONFIG=$( ruby -r yaml -e "puts YAML.load_file('config/database.yml')['production'].values_at('adapter','host','port','username','password','database','encoding').join(';')" )
IFS=';' read -a DBCONFIG_ARRAY <<< "$DBCONFIG"

ADAPTER=${DBCONFIG_ARRAY[0]}
HOST=${DBCONFIG_ARRAY[1]}
PORT=${DBCONFIG_ARRAY[2]}
USERNAME=${DBCONFIG_ARRAY[3]}
PASSWORD=${DBCONFIG_ARRAY[4]}
DATABASE=${DBCONFIG_ARRAY[5]}
ENCODING=${DBCONFIG_ARRAY[6]}

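# timestamped names for the database dump and the final archive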
DATE=$( date +%F_%H-%M-%S )
DUMP_FILE_NAME=${ADAPTER}_${DATABASE}_${DATE}
ARCHIVE_FILE_NAME=diaspora_$(hostname)_${DATE}.tar

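# dump the database; only PostgreSQL is supported so far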
case $ADAPTER in
  postgresql)
    echo "Running PostgreSQL dump..."
    TMP_FILE=$( mktemp )
    env PGPASSWORD="$PASSWORD" pg_dump -h "$HOST" -p "$PORT" -U "$USERNAME" -E "$ENCODING" "$DATABASE" > "$TMP_FILE"
    status=$?
    if [ $status -ne 0 ]; then
      echo "Error running dump command: $status"
      exit $status
    fi
    gzip -n "$TMP_FILE"
    mv "$TMP_FILE.gz" "$DUMP_FILE_NAME.sql.gz"
    ;;
  *)
    echo "Unknown adapter $ADAPTER"
    exit 1
    ;;
esac

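# pack the compressed dump, the uploads and the pod configuration together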
echo "Creating archive $ARCHIVE_FILE_NAME..."
tar -cf "$ARCHIVE_FILE_NAME" "$DUMP_FILE_NAME.sql.gz" public/uploads config/diaspora.toml

status=$?
if [ $status -eq 0 ]; then
  echo "Successfully created archive $ARCHIVE_FILE_NAME."
else
  echo "Error $status running tar. Archive wasn't created."
fi

unlink "$DUMP_FILE_NAME.sql.gz"
exit $status
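
To run backups unattended, you can call the script from cron. Note that the archive is written to the root of the installation, so you will probably want to move it to separate storage afterwards. A hypothetical crontab entry, assuming the script was saved as script/backup.sh inside an installation at /home/diaspora/diaspora and made executable:

# hypothetical example: nightly backup at 03:00; paths are placeholders
0 3 * * * /home/diaspora/diaspora/script/backup.sh >> /home/diaspora/backup.log 2>&1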