| title | updated | created |
|---|---|---|
| postgresql | 2021-05-04 14:58:11Z | 2021-05-04 14:58:11Z |
PostgreSQL
psql DBNAME USERNAME
\d            -- list all relations
\d tablename  -- show the table's DDL
sudo -u postgres createuser <username>
sudo -u postgres createdb <databasename>
psql# ALTER USER <username> WITH ENCRYPTED PASSWORD '<password>';
psql# GRANT ALL PRIVILEGES ON DATABASE <databasename> to <username>;
Create a CSV file:
COPY (SELECT * FROM "public".empbase) TO '/tmp/empbase.csv' WITH CSV header;
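Note that COPY writes the file on the server side, as the server process user; from psql, \copy writes the file client-side instead. A minimal stdlib-only sketch of reading such an export back in Python (the columns here are made up, not taken from the real empbase table):

```python
import csv
import io

# COPY ... WITH CSV HEADER produces a header row followed by the data rows;
# this StringIO stands in for /tmp/empbase.csv (hypothetical columns)
exported = io.StringIO("id,name\n1,alice\n2,bob\n")

reader = csv.DictReader(exported)  # header row becomes the dict keys
rows = list(reader)
print(rows[0]["name"])  # alice
```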
Jupyter:
first install: ipython-sql and psycopg2
* %load_ext sql
* %sql postgresql://john:****@localhost/testdb
* %%sql
  select * from aap;
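The %sql magic takes a standard connection URL. A small sketch of assembling one in Python (credentials are the placeholder ones from these notes); the one gotcha is that special characters in the password must be percent-encoded:

```python
from urllib.parse import quote_plus

# placeholder credentials from the notes, not real ones
user, password, host, db = "john", "qw12aap", "localhost", "testdb"

# quote_plus() percent-encodes characters like @ or : in the password,
# which would otherwise break URL parsing
url = "postgresql://{0}:{1}@{2}/{3}".format(user, quote_plus(password), host, db)
print(url)  # postgresql://john:qw12aap@localhost/testdb
```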
From a Python program:
import sys
import psycopg2

query = "select * from aap"
try:
    conn = psycopg2.connect("postgres://john:qw12aap@localhost:5432/testdb")
except psycopg2.OperationalError as e:
    print('Unable to connect!\n{0}'.format(e))
    sys.exit(1)
else:
    print("connected")
cur = conn.cursor()
cur.execute(query)
for x in cur.fetchall():
    print(x)
cur.close()
conn.close()
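When a query takes user input, use DB-API placeholders rather than string formatting. psycopg2 follows the same DB-API 2.0 pattern as sqlite3, which ships with Python, so a self-contained sketch with sqlite3 illustrates it (only the placeholder style differs: ? here, %s in psycopg2):

```python
import sqlite3

# in-memory database; same cursor/execute/fetch pattern as psycopg2
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table aap (id integer, name text)")
# values are passed separately as a tuple, never interpolated into the SQL
cur.execute("insert into aap values (?, ?)", (1, "alice"))
cur.execute("select name from aap where id = ?", (1,))
print(cur.fetchone()[0])  # alice
cur.close()
conn.close()
```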
Connection from Apache Spark example:
val driver = "org.postgresql.Driver"
Class.forName(driver)
val df = spark.sqlContext
.read
.format("jdbc")
.options(Map("url"->"jdbc:postgresql://172.17.0.2:5432/postgres", "user"->"postgres","password"->"qw12aap","driver"->driver,"dbtable"->"company"))
.load()
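The same connection options in PySpark form, as a sketch: the dict below mirrors the Scala options map above (host, credentials and table come from these notes, not verified here), and with a live SparkSession it would be passed to the JDBC reader.

```python
# JDBC options mirroring the Scala snippet (values assumed from the notes)
jdbc_options = {
    "url": "jdbc:postgresql://172.17.0.2:5432/postgres",
    "user": "postgres",
    "password": "qw12aap",
    "driver": "org.postgresql.Driver",
    "dbtable": "company",
}

# with a SparkSession available this would be:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
print(sorted(jdbc_options))
```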
Install Docker
https://hackernoon.com/dont-install-postgres-docker-pull-postgres-bee20e200198
https://docs.databricks.com/spark/latest/data-sources/sql-databases.html
Create a persistent storage location:
mkdir -p $HOME/docker/volumes/postgres
launch docker container
docker run --rm --name pg-docker -e POSTGRES_PASSWORD=docker -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
connect to running container
docker exec -it pg-docker /bin/bash
inside container
psql -h localhost -U postgres -d postgres
sources: https://www.postgresql.org/files/documentation/pdf/11/postgresql-11-A4.pdf