
Saturday, 19 July 2014

MySQL and Oracle command equivalents

MySQL has specific commands that provide easy access to schema-level details through the information_schema database. Oracle does not provide such easy access to some of its schema-level metadata.

Here are some MySQL-specific commands/syntaxes and the equivalent Oracle techniques:

To get the list of databases

MySQL :
show databases
Oracle :
SELECT username FROM all_users ORDER BY username;

To get the current schema

MySQL :
select DATABASE();
Oracle :
SELECT sys_context('USERENV', 'CURRENT_SCHEMA') FROM dual;

To get the list of tables within the current database

MySQL :
use database_name;
show tables;
Oracle :
select * from user_tables;
In Oracle the schema is tied to the connected username, so it is effectively selected when the connection is created.
USER_TABLES has a row for every table in your own schema. If you are looking for the tables in your schema, this is the correct query; if you are looking for the tables in some other schema, it is not the right view to use.

ALL_TABLES will have a row for every table you have access to regardless of schema. You would, presumably, want to qualify the query by specifying the name of the schema you are interested in, i.e.
SELECT table_name
  FROM all_tables
 WHERE owner = <<name of schema>>
Of course, that assumes that you have at least SELECT access on every table in that schema. If that is not the case, then you would need to use DBA_TABLES (which would require that the DBA grant you access to that table), i.e.
SELECT table_name
  FROM dba_tables
 WHERE owner = <<name of schema>>

To get info about the connected sessions

MySQL :
show processlist
Oracle :
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER';

To limit the selection

MySQL :
select * from user limit 10;
Oracle :
select * from table_name where ROWNUM <= 10;
To select rows from somewhere in the middle

MySQL :
select username from user limit 10, 5;
(LIMIT takes an offset and a row count, so this returns rows 11-15.)
Oracle :
select element_name from (select element_name, ROWNUM as row_number from table_name) t1 where t1.row_number > 10 and t1.row_number <= 15;
Note: Here we have to use a subquery rather than writing "select element_name from table_name where ROWNUM > 10 and ROWNUM <= 15;" directly. ROWNUM is assigned to a row only after the row satisfies the WHERE clause, so the condition "ROWNUM > 10 and ROWNUM <= 15" can never be satisfied: the first candidate row is always offered ROWNUM 1, fails the test, and the counter never increments. The subquery lets the ROWNUMs be assigned inside it, and the outer query then filters on the materialised row numbers. (Oracle also does not allow the AS keyword before a table alias, so the subquery alias is written without it.)
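Note also that Oracle 12c and later support a row-limiting clause that avoids the subquery entirely; a sketch of the same rows 11-15 selection:
select element_name from table_name order by element_name offset 10 rows fetch next 5 rows only;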

The DESCRIBE command has the same syntax in both MySQL and Oracle.
desc table_name;
To view errors/warnings

MySQL :
show warnings / show errors
Oracle :
select * from user_errors; / show errors
MySQL has auto_increment Columns

MySQL :
create table table_name (element_id int AUTO_INCREMENT primary key, element_name varchar(20));
Oracle :
i) Create the table without the auto_increment keyword (it does not exist in Oracle):
create table table_name (element_id int primary key, element_name varchar(20));
ii) Create a sequence, which provides the incremented values
create sequence auto_incrementer;
iii) Create a trigger, which gets the next value from the sequence and updates it to the column to be auto_incremented
CREATE TRIGGER trig_incrementor BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
SELECT auto_incrementer.NEXTVAL into :new.element_id FROM dual;
END;
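Note: on Oracle 12c and later, identity columns make the sequence/trigger pair unnecessary; a minimal sketch with the same hypothetical table:
create table table_name (element_id number generated by default as identity primary key, element_name varchar(20));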
To get the table create script back

MySQL :
show create table table_name;
Oracle :
Make sure the select_catalog_role is already granted to the given user; if not, assign the role as shown below.
grant select_catalog_role to [username];
Increase the page size and maximum width for displaying results so that the complete table definition fits in the sqlplus console:
set pagesize 999
set long 9000
select dbms_metadata.get_ddl('TABLE', 'TABLE_NAME', 'SCHEMA_NAME') from dual;
(The third argument to get_ddl is the owning schema, not a database name.)
To get the session variables

MySQL :
show variables; or show variables like 'inno%';
Oracle :
SELECT name, value FROM gv$parameter; or SELECT sys_context('USERENV', '<parameter_name>') FROM dual; (where <parameter_name> is a USERENV attribute such as 'SESSION_USER' or 'CURRENT_SCHEMA')

Explain the execution plan of a SQL statement

MySQL :
explain select * from table_name;
Oracle :

i) First execute the explain plan so that it fills the plan_table (this table needs to be created in the standard plan_table format if it does not already exist):
explain plan for select * from table_name;
ii) The results of the explain plan are now populated in the plan_table, so we use a hierarchical (CONNECT BY) query to get a readable summary of them:
select substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,30 ) "Operation", object_name "Object" from plan_table start with id = 0 connect by prior id=parent_id;
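On reasonably recent Oracle versions you can skip the connect-by query and let DBMS_XPLAN format the plan instead; a minimal sketch:
explain plan for select * from table_name;
select * from table(dbms_xplan.display);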


Thursday, 17 July 2014

MySQL and Postgres command equivalents

Task: list existing databases, connect to one of the databases, then list existing tables and finally show the structure of one of the tables.

In MySQL:
show databases;
use database;
show tables;
describe table;
exit
In PostgreSQL:
\l
\c database
\dt
\d+ table
\q
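If you prefer plain SQL to the client meta-commands, the information_schema views work in both engines; a sketch listing the tables of the current database:
In MySQL:
SELECT table_name FROM information_schema.tables WHERE table_schema = DATABASE();
In PostgreSQL:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';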


Saturday, 5 January 2013

PostgreSQL cluster using DRBD and hot standby

Cluster Configuration:

First install all the necessary packages:
yum install gfs2-utils cman fence-virtd-checkpoint lvm2-cluster perl-Net-Telnet rgmanager device-mapper-multipath ipvsadm piranha luci modcluster cluster-snmp ricci
yum groupinstall "High Availability"
yum install postgresql-server
chkconfig --level 123456 ricci on
chkconfig --level 123456 luci on
chkconfig --level 123456 cman on
chkconfig --level 123456 iptables off
chkconfig --level 123456 ip6tables off
chkconfig postgresql on
chkconfig cman on
chkconfig rgmanager on
Now edit the cluster configuration file:
vi /etc/cluster/cluster.conf
Make it look like this:
<?xml version="1.0"?>
<cluster config_version="7" name="pgcluster">
<clusternodes>
<clusternode name="10.39.30.7" votes="1" nodeid="1">
<fence/>
</clusternode>
<clusternode name="10.39.30.8" votes="1" nodeid="2">
<fence/>
</clusternode>
</clusternodes>
<rm>
<failoverdomains>
<failoverdomain name="PGSQL" nofailback="0" ordered="0" restricted="0">
<failoverdomainnode name="10.39.30.7"/>
<failoverdomainnode name="10.39.30.8"/>
</failoverdomain>
</failoverdomains>
<resources>
<ip address="10.39.30.6" monitor_link="on" sleeptime="10"/>
<postgres-8 config_file="/var/lib/pgsql/data/postgresql.conf" name="pgsql" shutdown_wait="5" />
</resources>
<service autostart="1" exclusive="0" domain="PGSQL" name="pgsql" recovery="relocate">
<drbd name="drbd-postgres" resource="r0">
<fs device="/dev/drbd0" fsid="6202" fstype="ext3" mountpoint="/var/lib/pgsql" name="pgsql" options="noatime"/>
</drbd>
<ip ref="10.39.30.6"/>
<postgres-8 ref="pgsql"/>
</service>
</rm>
<cman expected_votes="1" two_node="1"/>
<fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
</cluster>

DRBD Configuration:

Install the necessary files:
yum install gcc flex make libxslt rpm-build redhat-rpm-config kernel-devel
You need to download and build DRBD manually:
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.1.tar.gz
The following commands will generate the DRBD RPM packages:
tar -xvf *.tar.gz
mkdir -p /root/rpmbuild/SOURCES/
cp drbd*.tar.gz /root/rpmbuild/SOURCES/
cd drbd-8.4.1
./configure --with-rgmanager --enable-spec --with-km
make tgz
rpmbuild --bb drbd.spec --without xen --without heartbeat --without udev --without pacemaker --with rgmanager
rpmbuild --bb drbd-kernel.spec
rpmbuild --bb drbd-km.spec
Now install the newly created packages:
cd /root/rpmbuild/RPMS/x86_64
rpm -i drbd-utils-8.4.1-1.el6.x86_64.rpm drbd-bash-completion-8.4.1-1.el6.x86_64.rpm drbd-8.4.1-1.el6.x86_64.rpm drbd-rgmanager-8.4.1-1.el6.x86_64.rpm drbd-km-2.6.32_279.14.1.el6.x86_64-8.4.1-1.el6.x86_64.rpm
Add your nodes' IP addresses to the hosts file on both machines:
vi /etc/hosts
10.39.30.7 RHPG1
10.39.30.8 RHPG2
Create a DRBD configuration file:
vi /etc/drbd.d/r0.res
resource r0 {
   device /dev/drbd0;
   meta-disk internal;
   on RHPG1 {
      address 10.39.30.7:7789;
      disk /dev/sdb1;
   }
   on RHPG2 {
      address 10.39.30.8:7789;
      disk /dev/sdb1;
   }
}
Create the partition /dev/sdb1 but do not format it:
fdisk /dev/sdb
Run on both machines:
drbdadm create-md r0
modprobe drbd
drbdadm up r0
Run on one of the machines to promote it to primary and start the initial sync:
drbdadm -- --overwrite-data-of-peer primary r0
Check the sync status on any of the hosts with:
service drbd status
Create the file system on /dev/drbd0
mkfs.ext3 /dev/drbd0
and move over the PostgreSQL data
mkdir /tmp/pgdata
mount /dev/drbd0 /tmp/pgdata
cp -r /var/lib/pgsql /tmp/pgdata
Wait until the data is synced between the two hosts; check the status with:
service drbd status
Unmount the drbd device:
umount /dev/drbd0
and then, on both hosts do:
rm -rf /var/lib/pgsql/*
Restart the drbd service
service drbd restart
the status from:
service drbd status
should show that both hosts are in secondary:
0:r0 Connected Secondary/Secondary UpToDate/UpToDate C
And ready to be managed by rgmanager.


Tuesday, 16 October 2012

Installing Oracle 11g R2 Express Edition on Ubuntu 64-bit

These are the steps I took to install Oracle 11g R2 Express Edition on an Ubuntu 12.04 LTS (Precise Pangolin) server, based on the tutorial found here:

Download the Oracle 11gR2 express edition installer from the link given below:
http://www.oracle.com/technetwork/products/express-edition/downloads/index.html
( You will need to create a free Oracle web account if you don't already have one )

Unzip it :
unzip oracle-xe-11.2.0-1.0.x86_64.rpm.zip
Install the following packages :
sudo apt-get install alien libaio1 unixodbc vim
The Red Hat based installer of Oracle XE 11gR2 relies on /sbin/chkconfig, which is not used in Ubuntu. The chkconfig package available for the current version of Ubuntu produces errors and may not be safe to use, so you'll need to create a special chkconfig script. Below is a simple trick to get around the problem and install Oracle XE successfully:
sudo vi /sbin/chkconfig
(copy and paste the following into the file )
#!/bin/bash
# Oracle 11gR2 XE installer chkconfig hack for Ubuntu
file=/etc/init.d/oracle-xe
if [[ ! `tail -n1 $file | grep INIT` ]]; then
echo >> $file
echo '### BEGIN INIT INFO' >> $file
echo '# Provides: OracleXE' >> $file
echo '# Required-Start: $remote_fs $syslog' >> $file
echo '# Required-Stop: $remote_fs $syslog' >> $file
echo '# Default-Start: 2 3 4 5' >> $file
echo '# Default-Stop: 0 1 6' >> $file
echo '# Short-Description: Oracle 11g Express Edition' >> $file
echo '### END INIT INFO' >> $file
fi
update-rc.d oracle-xe defaults 80 01
#EOF
Save the above file and give it the appropriate execute permissions:
sudo chmod 755 /sbin/chkconfig
Oracle 11gR2 XE requires the following additional kernel parameters:
sudo vi /etc/sysctl.d/60-oracle.conf 
(Enter the following)
# Oracle 11g XE kernel parameters
fs.file-max=6815744
net.ipv4.ip_local_port_range=9000 65000
kernel.sem=250 32000 100 128
kernel.shmmax=536870912
(Save the file)

Note: kernel.shmmax should be the maximum possible value, e.g. the size of your physical RAM in bytes (512 MB RAM == 512*1024*1024 == 536870912 bytes).

Verify the change :
sudo cat /etc/sysctl.d/60-oracle.conf
Load new kernel parameters:
sudo service procps start
Verify:
sudo sysctl -q fs.file-max
-> fs.file-max = 6815744
Increase the system swap space. Analyze your current swap space with the following command:
free -m
The minimum swap space requirement of Oracle 11gR2 XE is 2 GB. If yours is smaller, you can increase it by following the steps in one of my previous posts.

Make some more required changes:
sudo ln -s /usr/bin/awk /bin/awk
sudo mkdir -p /var/lock/subsys
sudo touch /var/lock/subsys/listener
Convert the Red Hat (rpm) package to an Ubuntu package:
sudo alien --scripts -d oracle-xe-11.2.0-1.0.x86_64.rpm
(this may take a long time)

Go to the directory where you created the Ubuntu package file in the previous step and enter the following command in a terminal:
sudo dpkg --install oracle-xe_11.2.0-2_amd64.deb 
Do the following to avoid getting the MEMORY_TARGET error (ORA-00845: MEMORY_TARGET not supported on this system):
sudo rm -rf /dev/shm
sudo mkdir /dev/shm
sudo mount -t tmpfs shmfs -o size=2048m /dev/shm
(here size will be the size of your RAM in MBs ).

The reason for doing all this is that on an Ubuntu system /dev/shm is just a link to /run/shm, but Oracle requires a separate /dev/shm mount point.

To make the change permanent do the following :

Create a file named S01shm_load in /etc/rc2.d:
sudo vi /etc/rc2.d/S01shm_load
Then copy and paste the following lines into the file:
#!/bin/sh
case "$1" in
start) mkdir /var/lock/subsys 2>/dev/null
touch /var/lock/subsys/listener
rm /dev/shm 2>/dev/null
mkdir /dev/shm 2>/dev/null
mount -t tmpfs shmfs -o size=2048m /dev/shm ;;
*) echo error
exit 1 ;;
esac
Save the file and give it execute permissions:
sudo chmod 755 /etc/rc2.d/S01shm_load
This will ensure that every-time you start your system, you get a working Oracle environment.

You can now proceed to the Oracle initialization script:
sudo /etc/init.d/oracle-xe configure
Enter the following configuration information:
  • A valid HTTP port for the Oracle Application Express (the default is 8080)
  • A valid port for the Oracle database listener (the default is 1521)
  • A password for the SYS and SYSTEM administrative user accounts
  • Confirm password for SYS and SYSTEM administrative user accounts
  • Whether you want the database to start automatically when the computer starts (next reboot).
Before you start using Oracle 11gR2 XE you have to set up a few more things:

a) Set up the environment variables by adding the following lines to the bottom of /etc/bash.bashrc:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_SID=XE
export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
export ORACLE_BASE=/u01/app/oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export PATH=$ORACLE_HOME/bin:$PATH
b) Reload the file to apply the changes:
source /etc/bash.bashrc
Start Oracle 11gR2 XE:
sudo service oracle-xe start
The output should be similar to the following:
user@machine:~$ sudo service oracle-xe start
Starting Oracle Net Listener.
Starting Oracle Database 11g Express Edition instance.
user@machine:~$
And you're done :)


Tuesday, 11 September 2012

Get list of foreign keys in MySQL

Here's a simple query for displaying all foreign keys and their references in a MySQL DB:
select
concat(table_name, '.', column_name) as 'foreign key',
concat(referenced_table_name, '.', referenced_column_name) as 'references'
from
information_schema.key_column_usage
where
referenced_table_name is not null;
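On a server with several databases this lists the foreign keys of every schema; a variant restricted to a single database (the database name is a placeholder):
select
concat(table_name, '.', column_name) as 'foreign key',
concat(referenced_table_name, '.', referenced_column_name) as 'references'
from
information_schema.key_column_usage
where
referenced_table_name is not null
and table_schema = 'your_database';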


Monday, 10 September 2012

Easy visualisation of database schemas

This is easy using SQLFairy; under Ubuntu it's as simple as:
sudo apt-get install sqlfairy
Next, dump your database tables, e.g. for MySQL:
mysqldump -u username -p -d mydatabase > mydatabase.sql
Finally, for a PNG image of your schema:
sqlt-graph -f MySQL -o mydatabase.png -t png mydatabase.sql
If your schema lacks explicit foreign keys, try the --natural-join option (see man sqlt-graph and man sqlt-diagram).

Here's an example for a SQLite DB:

Get the schema dump:
echo ".schema" | sqlite3 ~/.liferea_1.4/liferea.db >> liferea.sql
Generate a SVG diagram with:
sqlt-graph -c --natural-join --from=SQLite -t svg -o liferea_schema.svg liferea.sql


MySQL Export to CSV

If you need the data from a table or a query in a CSV file, so that you can open it in any spreadsheet software like Excel, you can use something like the following:
SELECT id, name, email INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM users WHERE 1;
Or you can use sed:

mysql -u username -ppassword database -B -e "SELECT * FROM table;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > filename.csv
Explanation:

username is your mysql username
password is your mysql password
database is your mysql database
table is the table you want to export

The -B option will delimit the data using tabs and each row will appear on a new line.
The -e option denotes the MySQL command to run, in our case the "SELECT" statement.
The "sed" command used here contains four sed scripts:

s/\t/","/g - this will search for all occurrences of tabs and replace them with ",".

s/^/"/ - this will place a " at the start of the line.

s/$/"/ - this will place a " at the end of the line.

s/\n//g - this strips any stray newlines (sed processes the input line by line, so it is mostly a no-op).

You can find the exported CSV file in the current directory. The name of the file is filename.csv.

However if there are a lot of tables that you need to export, you'll need a script like this:
#!/bin/bash
#### Begin Configuration ####
DB="mydb"
MYSQL_USER="root"
MYSQL_PASSWD='mypass'
MYSQL_HOST="127.0.0.1"
MYSQL_PORT="3306"
MYSQL="/usr/bin/mysql"
#### End Configuration ####
MYSQL_CMD="$MYSQL -u $MYSQL_USER -p$MYSQL_PASSWD -P $MYSQL_PORT -h $MYSQL_HOST"
TABLES=`$MYSQL_CMD --batch -N -D $DB -e "show tables"`
for TABLE in $TABLES
do
SQL="SELECT * FROM $TABLE;"
OUTFILE=$TABLE.csv
$MYSQL_CMD --database=$DB --execute="$SQL" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > $OUTFILE
done
Just be sure to change the configuration section to meet your needs.
Name the file something like: export_csv.sh and be sure to make it executable. In Linux, do something like:

chmod +x ./export_csv.sh
If you want to have all of the exported files in a certain directory, you could either modify the script or just make the directory, "cd" into it, and then run the script; it creates the files in the current working directory.
To change that behavior, you could easily modify the "OUTFILE" variable to something like:
OUTFILE="/my_path/$TABLE.csv"


Get Schema from SQLite DB

In SQLite, the schema is stored in the SQLITE_MASTER table. You can easily retrieve it with a command like:
echo ".schema" | sqlite3 ~/.liferea_1.4/liferea.db >> liferea.sql
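Since SQLITE_MASTER is an ordinary table, you can also query it with plain SQL; the schema text lives in its sql column:
SELECT name, sql FROM sqlite_master WHERE type = 'table';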


Tuesday, 31 July 2012

Can't start listener for XE: permission denied

I just reinstalled Oracle XE on Debian / Ubuntu and when I was about to start the listener, I got an error message about missing permissions.

Doing:
cd /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/bin
strace ./lsnrctl start
I found that it was trying to access /var/tmp/.oracle, but this directory is owned by root, so:
chown -R oracle:dba /var/tmp/.oracle
chown -R oracle:dba /var/run/.oracle

And now it works correctly.


Tuesday, 22 May 2012

Export multiple schemas from Oracle

For the examples to work we must first create a directory object you can access. The directory object is only a pointer to a physical directory; creating it does not actually create the physical directory on the file system.
sqlplus / AS SYSDBA
CREATE OR REPLACE DIRECTORY DUMP_DIR AS '/home/oracle/dumpdir/';
You can use expdp like this
expdp "'/ as sysdba'" dumpfile=TEST.dmp directory=DUMP_DIR logfile=TEST.log schemas=test1,test2,test3,test4
But if you want a separate dump file for each schema, you can use a shell script like this:
#!/bin/bash
export_schema=$1
expdp "'/ as sysdba'" dumpfile=${export_schema}.dmp directory=DUMP_DIR logfile=${export_schema}.log schemas=${export_schema}
# end of script
Now run the script:
exp_script.sh TEST1
or
exp_script.sh TEST2
Or if you prefer a one line script:
for export_schema in TEST1 TEST2 TEST3; do expdp "'/ as sysdba'" dumpfile=${export_schema}.dmp directory=DUMP_DIR logfile=${export_schema}.log schemas=${export_schema}; done;



Wednesday, 21 March 2012

View running processes in Oracle DB

This will show you a list of all running processes:
SET LINESIZE 200
SET PAGESIZE 200
SELECT PROCESS pid, sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER';
Identify database SID based on OS Process ID

Use the following SQL query; when prompted, enter the OS process PID:
SET LINESIZE 100
col sid format 999999
col username format a20
col osuser format a15
SELECT b.spid,a.sid, a.serial#,a.username, a.osuser
FROM v$session a, v$process b
WHERE a.paddr= b.addr
AND b.spid='&spid'
ORDER BY b.spid;
To make sure you are targeting the correct session, you may want to review the SQL associated with the offending task. To view the SQL being executed by the session you can use the following SQL statement:
SELECT
b.username, a.sql_text
FROM
v$sqltext_with_newlines a, v$session b, v$process c
WHERE
c.spid = '&spid'
AND
c.addr = b.paddr
AND
b.sql_address = a.address;
Killing the session

The basic syntax for killing a session is shown below.
ALTER SYSTEM KILL SESSION 'sid,serial#';
In a RAC environment you can optionally specify the INST_ID, shown when querying the GV$SESSION view. This allows you to kill a session on a different RAC node.
ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id';
The KILL SESSION command doesn't actually kill the session. It merely asks the session to kill itself. In some situations, like waiting for a reply from a remote database or rolling back transactions, the session will not kill itself immediately and will wait for the current operation to complete. In these cases the session will have a status of "marked for kill". It will then be killed as soon as possible.

In addition to the syntax described above, you can add the IMMEDIATE clause.
ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
This does not affect the work performed by the command, but it returns control back to the current session immediately, rather than waiting for confirmation of the kill.

If the marked session persists for some time, you may consider killing the process at the operating system level. Before doing this, it's worth checking whether the session is performing a rollback: if its USED_UREC value is decreasing, you should leave it to complete the rollback rather than killing it at the operating system level.
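A sketch of a query for watching that rollback progress (V$TRANSACTION joins to V$SESSION through the TADDR column; run it a few times and check whether USED_UREC is falling):
SELECT s.sid, s.serial#, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr;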


Tuesday, 20 March 2012

Unlock Oracle user account

Here's how to lock or unlock Oracle database user accounts.
ALTER USER username ACCOUNT LOCK;
ALTER USER username ACCOUNT UNLOCK;
you may also have to use:
GRANT connect, resource TO username;
to solve the "ORACLE ERROR:ORA-28000: the account is locked" error.


Monday, 12 March 2012

Duplicate Oracle Schema

Using imp/exp:
Export the database using:
exp 'system/password' owner=schema_to_be_duplicated file=filename.dmp log=logfile.log
Then import the dump file into the target schema:
imp 'system/password' fromuser=schema_to_be_duplicated touser=target_username file=filename.dmp
Keep in mind that the target schema must already exist in the database. If it does not, you must create it first using the CREATE USER command. The import does not create the user for you; it only migrates objects and data between schemas.

Using impdp/expdp:

We must first create a directory object you can access. The directory object is only a pointer to a physical directory, creating it does not actually create the physical directory on the file system.
sqlplus / AS SYSDBA
CREATE OR REPLACE DIRECTORY DUMP_DIR AS '/home/oracle/dumpdir/';

Export with:
expdp schema_to_be_duplicated/password DUMPFILE=filename.dmp DIRECTORY=DUMP_DIR
Import with:
impdp REMAP_SCHEMA=schema_to_be_duplicated:new_schema DUMPFILE=filename.dmp DIRECTORY=DUMP_DIR EXCLUDE=JOB
You will be asked for a username and password; the default is system/system.

Remember that you must first create the destination user:
CREATE USER new_schema IDENTIFIED BY new_password DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users;
GRANT CREATE session, CREATE table, CREATE view, CREATE procedure, CREATE synonym, CREATE SEQUENCE, CREATE TRIGGER, CREATE TYPE, CREATE MATERIALIZED VIEW, CREATE DATABASE LINK,Debug Any Procedure,Debug Connect Session TO new_schema;
ALTER USER new_schema QUOTA UNLIMITED ON USERS;


Thursday, 16 February 2012

List existing databases and tables in Oracle

Oracle has no "databases", only "schemas"; you can list them with:
SELECT username FROM all_users ORDER BY username;
Or with:
SELECT username, account_status FROM dba_users ORDER BY 1;
Or with:
select distinct owner from all_objects;
Or with:
SELECT DISTINCT owner FROM dba_objects ORDER BY 1;
When connected to Oracle you'll use by default the schema corresponding to your username (connecting as SCOTT, all objects created by you will belong to SCOTT's schema), and you'll also be able to use objects in other schemas that you've been granted rights on. Say you are SYSTEM and you want to read all entries from table A that resides in SCOTT's schema; you'll write something like:
SELECT * FROM SCOTT.A;
You can also list existing tables with:
SELECT owner, table_name FROM dba_tables;
Or if you do not have access to DBA_TABLES, you can see all the tables that your account has access to through the ALL_TABLES view:
SELECT owner, table_name FROM all_tables;
If you are only concerned with the tables that you own, not those that you have access to, you could use USER_TABLES
SELECT table_name FROM user_tables;
Since USER_TABLES only has information about the tables that you own, it does not have an OWNER column-- the owner, by definition, is you.

Oracle also has a number of legacy data dictionary views-- TAB, DICT, TABS, and CAT for example-- that could be used. In general, I would not suggest using these legacy views unless you absolutely need to backport your scripts to Oracle 6. Oracle has not changed these views in a long time so they often have problems with newer types of objects. For example, the TAB and CAT views both show information about tables that are in the user's recycle bin while the [DBA|ALL|USER]_TABLES views all filter those out. CAT also shows information about materialized view logs with a TABLE_TYPE of "TABLE" which is unlikely to be what you really want. DICT combines tables and synonyms and doesn't tell you who owns the object.


Monday, 17 October 2011

Drop all tables in a MySQL database

If you want to drop all tables from a MySQL DB without dropping the DB itself, you can use this command:
mysqldump -u[USERNAME] -p[PASSWORD] --add-drop-table --no-data [DATABASE] | grep ^DROP | mysql -u[USERNAME] -p[PASSWORD] [DATABASE]
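If that pipe trips over foreign key constraints, an alternative sketch is to generate the DROP statements from information_schema and run them with foreign key checks disabled (the database name is a placeholder):
SET FOREIGN_KEY_CHECKS = 0;
SELECT CONCAT('DROP TABLE IF EXISTS ', table_name, ';') FROM information_schema.tables WHERE table_schema = 'your_database';
-- execute the generated DROP statements here, then:
SET FOREIGN_KEY_CHECKS = 1;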


Thursday, 22 September 2011

Backuppc and MySQL

The best way to back up a MySQL server using Backuppc is to use a pre-dump script.

You can use $Conf{DumpPreUserCmd} to issue a mysqldump.

Stdout from these commands will be written to the Xfer (or Restore) log file. Note that all Cmds are executed directly without a shell, so the program name needs to be a full path and you can't include shell syntax like redirection and pipes; put that in a script if you need it.

So in our case we would create a script, on the Backuppc client, to dump all databases into a file:
vi /usr/local/sbin/myBkp.sh
and paste the following into it:
#!/bin/bash
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
DEST="/backup/mysqlDump.sql"
MYSQLUSER="root"
MYSQLPASS="mypassword"
# no need to change anything below...
#####################################################
LOCKFILE=/tmp/myBkup.lock
if [ -f $LOCKFILE ]; then
echo "Lockfile $LOCKFILE exists, exiting!"
exit 1
fi
touch $LOCKFILE
echo "== MySQL Dump Starting $(date) =="
$MYSQLDUMP --single-transaction --user=${MYSQLUSER} --password="${MYSQLPASS}" -A > ${DEST}
echo "== MySQL Dump Ended $(date) =="
rm $LOCKFILE
make the script executable:
chmod +x /usr/local/sbin/myBkp.sh 
and set $Conf{DumpPreUserCmd} with:
$sshPath -q -x -l root $host /usr/local/sbin/myBkp.sh
Now you just have to make sure that Backuppc is backing up the /backup folder (or whatever folder you set in the script); you can also exclude the /var/lib/mysql folder from the Backuppc backups.


Sunday, 4 September 2011

MySQL - Converting to Per Table Data File for InnoDB

Issue with shared InnoDB /var/lib/mysql/ibdata1 storage
InnoDB tables currently store data and indexes in a shared tablespace (/var/lib/mysql/ibdata1). Because of the shared tablespace, data corruption in one InnoDB table can result in MySQL failing to start on the entire machine. Repairing InnoDB corruption can be extremely difficult, and the repair process can result in data loss for tables that were not corrupted originally.

Since MySQL 5.5 will be using InnoDB as the default storage engine, it is important to consider the consequences of continuing to use the shared tablespace in /var/lib/mysql/ibdata1.

Changing to per-table tablespaces with innodb_file_per_table

As an option to resolve the issue, MySQL has a configuration variable called innodb_file_per_table. To use it, place the following into /etc/my.cnf to make InnoDB keep a separate file for each InnoDB table:
innodb_file_per_table=1
After adding the line, MySQL needs to be restarted on the machine.
With that line in /etc/my.cnf, any database created afterwards will store its InnoDB tables as .ibd files under /var/lib/mysql/database/. Please note that the shared tablespace will still hold the internal data dictionary and undo logs.
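You can verify that the setting took effect from the MySQL CLI:
SHOW VARIABLES LIKE 'innodb_file_per_table';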

Converting old InnoDB tables
Any old databases with InnoDB tables that previously shared the tablespace in ibdata1 will still be using that file, so those databases need to be switched to the new system. The following command in the MySQL CLI builds, for every InnoDB table, the command to run to convert it to the new innodb_file_per_table layout:
select concat('alter table ',TABLE_SCHEMA ,'.',table_name,' ENGINE=InnoDB;') as command FROM INFORMATION_SCHEMA.tables where table_type='BASE TABLE' and engine = 'InnoDB';
For example, running the prior command against Roundcube on my test machine returns the following:
alter table roundcube.cache ENGINE=InnoDB;
alter table roundcube.contacts ENGINE=InnoDB;
alter table roundcube.identities ENGINE=InnoDB;
alter table roundcube.messages ENGINE=InnoDB;
alter table roundcube.session ENGINE=InnoDB;
alter table roundcube.users ENGINE=InnoDB;
You would then simply need to issue the commands returned by the MySQL CLI to convert each table to the new innodb_file_per_table format.

Please note that these commands would only need to be run in MySQL command line for the conversion.

You can use the following script:
#!/bin/bash
MYSQL="$(which mysql)"
MYSQLUSER="root"
MYSQLPASS="mypassword"
# no need to change anything below...
#####################################################
TBLS=$(mysql -u $MYSQLUSER -p$MYSQLPASS -Bse "select concat(TABLE_SCHEMA ,'.',table_name) as tbl FROM INFORMATION_SCHEMA.tables where table_type='BASE TABLE' and engine = 'InnoDB';")
for tbl in $TBLS
do
echo "Converting table $tbl"
mysql -u $MYSQLUSER -p$MYSQLPASS -Bse "alter table $tbl ENGINE=InnoDB;"
done
Possible Issues for Converting Old InnoDB Tables
1. Possible system load might occur during the conversion
2. Possible issues with drive space filling up for the conversion


How to determine type of mysql database

To determine the storage engine being used by a table, you can use show table status. The Engine field in the results shows the storage engine for the table. Alternatively, you can select the engine field from information_schema.tables.
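For example, from the MySQL CLI (database_name and table_name are placeholders):
show table status from database_name like 'table_name';
The Engine column of the output names the storage engine for each matching table.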

To get the type per database:
mysql -u root -p'<password>' -Bse 'select distinct table_schema, engine from information_schema.tables'
For a specific table use:
select engine from information_schema.tables where table_schema = 'schema_name' and table_name = 'table_name'
You can change between storage engines using alter table:
alter table the_table engine = InnoDB;
Where, of course, you can specify any available storage engine.


Monday, 22 August 2011

Performance Tuning MySQL for Zabbix

In my previous post I shared some tips on how to tune the ZABBIX configuration to get better results; however, the most important tuning you have to do is on the database server. Remember that these values depend on how much memory you have available on your server. Here is how I configured my MySQL server:

1. Use a tmpfs tmpdir: create a folder like /mytmp and in /etc/my.cnf configure:
tmpdir=/mytmp
and in /etc/fstab put:
tmpfs /mytmp tmpfs size=1g,nr_inodes=10k,mode=700,uid=102,gid=105 0 0
You'll have to mkdir /mytmp, and the numeric uid and gid values for your mysql user and group need to go on that line. Then you should be able to mount /mytmp and use tmpfs for MySQL's temp directory. I'm not certain about the size and nr_inodes options; I took them from the Linux tmpfs docs on the web and they seemed reasonable to me.

2. Buffer cache/pool settings.

In /etc/my.cnf, jack up innodb_buffer_pool_size as much as possible. If you run /usr/bin/free, the value in the "-/+ buffers/cache" row under the "free" column shows how much buffer cache you have. I've also set up InnoDB to use O_DIRECT so that the data cached in the InnoDB buffer pool is not duplicated in the filesystem buffer cache. So, in /etc/my.cnf:
innodb_buffer_pool_size=8000M
innodb_flush_method=O_DIRECT
3. Size the log files.

The correct way to resize this is documented here:

http://dev.mysql.com/doc/refman/5.0/en/adding-and-removing.html

In /etc/my.cnf the value I'm going to try is:
innodb_log_file_size=64M
Too small a value means that MySQL is constantly flushing from the logfiles to the tablespaces. On write-mostly databases it is better to increase this size to keep Zabbix streaming to the logfiles rather than constantly flushing into the tablespaces. The penalty, however, is slower shutdown and startup times.

4. Other parameters
innodb_file_per_table
Use file_per_table to keep tablespaces more compact, and run "optimize table" periodically. Note that when you set this value in my.cnf you don't get an actual file per table until you run an optimize on all the tables. This will take a long time on the large zabbix history* and trends* tables.
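A sketch of how information_schema can generate those OPTIMIZE statements for the InnoDB tables:
select concat('optimize table ', table_schema, '.', table_name, ';') from information_schema.tables where table_type = 'BASE TABLE' and engine = 'InnoDB';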
Turn on slow query logging:
log_slow_queries=/var/log/mysql.slow.log
The following setting affects the ratio of Threads_created to Connections:
thread_cache_size=4
query_cache_limit=1M
query_cache_size=128M
tmp_table_size=256M
max_heap_table_size=256M
table_cache=256
max_connections = 400
join_buffer_size=256k
read_buffer_size=256k
read_rnd_buffer_size=256k 
The following should help a lot with high-volume writes:
innodb_flush_log_at_trx_commit=2


Simple Zabbix tuning tips

If you're getting gaps in ZABBIX's graphs and unknown status on some items a little too often, it might mean that your monitoring server is short on performance. Here are some general rules you can follow to boost ZABBIX performance:

  • If the DB is located on the same host as zabbix, change zabbix_server.conf so it uses a Unix socket to connect to the DB
  • Increase the number of pollers, trappers and pingers in the server config, but don't overdo it.
    • General rule - keep the value of this parameter as low as possible. Every additional instance of zabbix_server adds a known overhead; at the same time, parallelism is increased. The optimal number of instances is reached when the queue, on average, contains the minimum number of parameters (ideally 0 at any given moment). This value can be monitored using the internal check zabbix[queue], or you can look at "Administration -> Queue" in the web interface.
  • Increase the number of processes in the agents' configuration; again, don't overdo it.
  • Change some of the items to use active checks (leave a few, like host status, as regular checks so you still get availability information). Remember that the hostname set in the zabbix agent conf file must match the hostname given to the host in the web interface.
    • A regular check is initiated by the ZABBIX server: it periodically sends requests to an agent to get the latest info. The agent is passive; it just processes the requests sent by the server.
    • An active check works the other way around: ZABBIX agents connect to the ZABBIX server to get a list of all checks for the host, then periodically send the required information to the ZABBIX server. Note that the ZABBIX server does not initiate anything; the ZABBIX agent does all the active work. This doesn't require polling on the server side and thus significantly (1.5x-2x) improves ZABBIX server performance, but if the host goes down the server won't get any information.
  • monitor required parameters only

However, the most important tuning you have to make is on the DB server; in my next post I'll give you some advice on how to tune a MySQL server to boost ZABBIX performance.
