Channel: EasyOraDBA

Of Planes & Engineers


Moving from an Architect's role to an Engineer's role is often seen as a strange move in IT. But it all depends on where your passion lies.

Let me give you an analogy between IT and the aviation industry. I chose aviation as an example because it is a high-tech industry with engineering talent similar to, if not better than, ours. So let's say Airbus is going to design a new plane and you will be the Architect for this plane's design and execution. Your end goal is to make a fuel-efficient and safe plane that sells for your company. On the other hand, the same project requires a team of engineers to ensure the wing of the new plane meets the project expectations and to integrate the new wing design with the rest of the fuselage. The same engineers can be called in whenever the plane has a stability issue. Whereas the Architect's role waters down once the product is launched, the Engineers continue to play an important role in daily operations.

I have been fortunate enough to have worked in both roles. Both roles are equally important, whether it's a plane you're building or a scalable web app. But calculating drag coefficients, lift angles and wing vortices to minimize drag on the wing is just a thrill of its own. Now go figure where my true passion lies 🙂

The post Of Planes & Engineers appeared first on EasyOraDBA.


Move Oracle Database 12c from On-Premise to AWS RDS Oracle Instance using SQL Developer


Amazon Web Services has been gaining popularity in the last few years as cloud computing has taken the spotlight. Slowly, traditional enterprises are making the journey to the cloud. Oracle is considered one of the most mission-critical applications in the enterprise, and moving an Oracle database to the cloud can bring benefits from both an operational and a financial perspective.

In this exercise we will move an on-premise Oracle DB schema to an AWS RDS instance running Oracle 12cR1.

 

Pre-requisites :

1. You already have a source Oracle database installed

2. You know how to provision an AWS RDS Oracle Instance

3. You have access to both instances

4. You have basic understanding of AWS S3 and AWS console

5. You have the latest version of SQL Developer installed on your machine

Source DB:

Oracle 12cR1 (12.1.0.2) running on CentOS 7.1

Destination DB:

Oracle 12cR1 running on AWS RDS Instance

High Level Steps to Migrate:

1. Create the destination Oracle 12cR1 instance on AWS. Provisioning an Oracle DB on AWS RDS is one of the easiest parts of this exercise

2. Connect to both the source (on-premise) and destination (AWS) databases from SQL Developer

3. Go to Tools > Database Copy and Select Source and Destination Databases

I prefer to do a Tablespace Copy since most of the apps I work on reside in a single tablespace, but this is your choice. You can choose Objects, Schemas or even entire Tablespaces to be copied across.

IMPORTANT: Make sure you have created the source schema in the destination database before proceeding to the next step, otherwise you will get a "User does not exist" error

In the destination AWS RDS instance, run the commands below

SQL> create user <source-schema-name> identified by <password123>;

SQL> grant dba to <source-schema-name>;

4. Start the Database Copy

5. Check the Performance Insights console to see what is happening in the background

 

6. Query the destination database to verify that the objects have arrived and are valid

SQL> select * from user_tables;

SQL> select * from dba_objects where status='INVALID';
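
As an optional sanity check (a minimal sketch, not part of the SQL Developer copy itself), you can compare per-type object counts between source and destination; replace <source-schema-name> with your schema name:

SQL> select object_type, count(*) from dba_objects where owner = upper('<source-schema-name>') group by object_type order by object_type;

Run the same query on both databases; the counts should match.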

The post Move Oracle Database 12c from On-Premise to AWS RDS Oracle Instance using SQL Developer appeared first on EasyOraDBA.

Generate Fake Data using Python


Being a data engineer, one of the tasks you have to do almost daily is load huge amounts of data into your data warehouse or data lake. Sometimes, to benchmark load times or to emulate performance tuning issues in your test environment, you need test datasets. There are a lot of very good, huge open datasets available on Kaggle and AWS.

But sometimes you do not need actual data; all you need is a CSV file with dummy data in it. Fear not, up comes Python to the rescue. Python is the golden goose in the age of information: not only can it help you sort through massive amounts of data, it can also help you generate data.

Faker is a Python package which can generate fake data for you. First you need to pip install faker. For this exercise we are using Python 3.7.2.

$ python -m pip install faker
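
A quick, illustrative sanity check that the package is installed (the generated name is random, so your output will differ):

>>> from faker import Faker
>>> fake = Faker()
>>> fake.name()
'Allison Hill'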

— Script to Generate a CSV file with Fake Data and 1 Billion Rows —

Caution: the file size will be about 1.3 GB and generating it can really hammer your machine. I have an EC2 instance on which I generate this test data and leave it running in the background. You can use the multiprocessing module in Python and hammer all cores, but that is a discussion worthy of its own blog post (a minimal sketch follows after the script below).

import csv
import random
from time import time
from decimal import Decimal
from faker import Faker

RECORD_COUNT = 1000000000
fake = Faker()


def create_csv_file():
    with open('/u01/users1.csv', 'w', newline='') as csvfile:
        fieldnames = ['userid', 'username', 'firstname', 'lastname', 'city', 'state',
                      'email', 'phone', 'cardno', 'likesports', 'liketheatre',
                      'likeconcerts', 'likejazz', 'likeclassical', 'likeopera',
                      'likerock', 'likevegas', 'likebroadway', 'likemusicals']
        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)

        writer.writeheader()
        for i in range(RECORD_COUNT):
            writer.writerow(
                {
                    'userid': fake.ean8(),
                    'username': fake.user_name(),
                    'firstname': fake.first_name(),
                    'lastname': fake.last_name(),
                    'city': fake.city(),
                    'state': fake.state_abbr(),
                    'email': fake.email(),
                    'phone': fake.phone_number(),
                    'cardno': fake.credit_card_number(card_type=None),
                    'likesports': fake.null_boolean(),
                    'liketheatre': fake.null_boolean(),
                    'likeconcerts': fake.null_boolean(),
                    'likejazz': fake.null_boolean(),
                    'likeclassical': fake.null_boolean(),
                    'likeopera': fake.null_boolean(),
                    'likerock': fake.null_boolean(),
                    'likevegas': fake.null_boolean(),
                    'likebroadway': fake.null_boolean(),
                    'likemusicals': fake.null_boolean(),
                }
            )

if __name__ == '__main__':
    create_csv_file()

This will create a file users1.csv with a billion rows of generated fake data which looks almost like real data.
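
As mentioned above, a full multiprocessing treatment deserves its own post; the following is only a minimal sketch of the idea, assuming each worker writes its own part file with a reduced set of the same Faker fields (the row count, worker count and file names are illustrative):

import csv
from multiprocessing import Pool
from faker import Faker

ROWS_PER_WORKER = 1_000_000   # illustrative chunk size per worker
WORKERS = 4                   # illustrative worker count

def write_part(part_id):
    # Each worker gets its own Faker instance and its own output file
    fake = Faker()
    path = '/u01/users_part{}.csv'.format(part_id)
    with open(path, 'w', newline='') as csvfile:
        writer = csv.writer(csvfile)
        for _ in range(ROWS_PER_WORKER):
            writer.writerow([fake.ean8(), fake.user_name(), fake.email()])
    return path

if __name__ == '__main__':
    with Pool(WORKERS) as pool:
        print(pool.map(write_part, range(WORKERS)))

The part files can then be concatenated or loaded in parallel, depending on your target system.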


The post Generate Fake Data using Python appeared first on EasyOraDBA.

Ebook : Advanced Architecture of Oracle Database on AWS

Python Script to Copy-Unload Data to Redshift from S3


 

import psycopg2
import time
import sys
import datetime
from datetime import date
datetime_object = datetime.datetime.now()
print ("Start TimeStamp")
print ("---------------")
print(datetime_object)
print("")

#Progress Bar Function
def progressbar(it, prefix="", size=60, file=sys.stdout):
    count = len(it)
    def show(j):
        x = int(size*j/count)
        file.write("%s[%s%s] %i/%i\r" % (prefix, "#"*x, "."*(size-x), j, count))
        file.flush()
    show(0)
    for i, item in enumerate(it):
        yield item
        show(i+1)
    file.write("\n")
    file.flush()

#Obtaining the connection to RedShift
con=psycopg2.connect(dbname= 'dev', host='redshift.amazonaws.com',
port= '5439', user= 'awsuser', password= '*****')

#Copy Command as Variable
copy_command="copy users from 's3://redshift-test-bucket/allusers_pipe.txt' credentials 'aws_iam_role=arn:aws:iam::775088:role/REDSHIFTROLE' delimiter '|' region 'ap-southeast-2';"

#Unload Command as Variable
unload_command="unload ('select * from users') to 's3://redshift-test-bucket/users_"+str(datetime.datetime.now())+".csv' credentials 'aws_iam_role=arn:aws:iam::7755088:role/REDSHIFTROLE' delimiter '|' region 'ap-southeast-2';"

#Opening a cursor and run copy query
cur = con.cursor()
cur.execute("truncate table users;")
cur.execute(copy_command)
con.commit()

#Decorative progress bar (fixed duration); the COPY above has already completed
for i in progressbar(range(100), "Copying Data into Redshift: ", 10):
    time.sleep(0.1) # placeholder for any other work you need

print("")

#Run the unload query
cur.execute(unload_command)

#Decorative progress bar (fixed duration); the UNLOAD above has already completed
for i in progressbar(range(600), "Unloading Data from Redshift to S3: ", 60):
    time.sleep(0.1) # placeholder for any other work you need

print("")

#Close the cursor and the connection
cur.close()
con.close()

datetime_object_2 = datetime.datetime.now()
print ("End TimeStamp")
print ("-------------")
print(datetime_object_2)
print("")

The post Python Script to Copy-Unload Data to Redshift from S3 appeared first on EasyOraDBA.

Oracle Database 19c (19.3.0) for Linux is available for download


Oracle Database 19c (19.3.0) for Linux is available for download as of now from OTN and eDelivery. For those of you who started with Oracle Database 19.2 already, the Updates (RU) 19.3.0 for the database and GI are available as well for Linux and Solaris. Oracle Database 19c on premises for Linux You can download…

via Oracle Database 19c (19.3.0) for Linux is available for download — Upgrade your Database – NOW!

The post Oracle Database 19c (19.3.0) for Linux is available for download appeared first on EasyOraDBA.

Automatic Indexing in Oracle Database 19c and Other New Features


Automatic Indexing (AI) is probably the most important new feature of Oracle Database 19c and AI is arguably one of the best example of AI in the IT industry. But there is much more that came along with 19c. Here is my choice of the top 10 least known (for now at least) new features […]

via What else besides Automatic Indexing is new in Oracle Database 19c? — Julian Dontcheff’s Database Blog

The post Automatic Indexing in Oracle Database 19c and Other New Features appeared first on EasyOraDBA.

Build EC2 Infrastructure using Terraform


We will use the infrastructure automation tool Terraform to create an EC2 instance in region 'us-east-1' with AMI ID 'ami-2757f631'. If you need to create the EC2 instance in any other region you will need a different AMI ID, since AMIs are region-specific.

1. Download and install Terraform for Linux from the Terraform website: https://www.terraform.io/downloads.html

Note: Install the AWS CLI and configure your AWS credentials before we begin

On Linux the download is a zip file containing a single binary. Unzip it to any directory and copy the file 'terraform' to /usr/bin


2. Create a Terraform configuration file in your current directory


$ vim ec2.tf

provider "aws" {
  region     = "us-east-1"
}
resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
  key_name      = "us-east-1-keypair"
}


3. Initialize Terraform

$ terraform init
Initializing modules...
- redshift in ../..
Downloading terraform-aws-modules/security-group/aws 3.0.1 for sg...
- sg in .terraform/modules/sg/terraform-aws-modules-terraform-aws-security-group-a332a3b/modules/redshift
- sg.sg in .terraform/modules/sg/terraform-aws-modules-terraform-aws-security-group-a332a3b
Downloading terraform-aws-modules/vpc/aws 2.5.0 for vpc...
- vpc in .terraform/modules/vpc/terraform-aws-modules-terraform-aws-vpc-6c31234

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (terraform-providers/aws) 2.14.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.14"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.


4. Apply Terraform Configuration

Note 1: With Terraform 0.11 and above you do not have to run the 'terraform plan' command separately; 'terraform apply' generates the plan and asks for confirmation.

Note 2: For security purposes it is not good practice to store access_key or secret_key in the .tf file. If you have installed the AWS CLI, Terraform will pick up your AWS credentials from '~/.aws/credentials' or from an attached IAM role.
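
For example, a minimal sketch (not from the original post) of pointing the provider at a named credentials profile instead of hard-coding keys; the profile name 'default' is an assumption:

provider "aws" {
  region  = "us-east-1"
  profile = "default"   # assumed profile name in ~/.aws/credentials
}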

$ terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.example will be created
  + resource "aws_instance" "example" {
      + ami                          = "ami-2757f631"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t2.micro"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = "us-east-1-keypair"
      + network_interface_id         = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tenancy                      = (known after apply)
      + volume_tags                  = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + snapshot_id           = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = (known after apply)
          + iops                  = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_instance.example: Creating...
aws_instance.example: Still creating... [10s elapsed]
aws_instance.example: Still creating... [20s elapsed]
aws_instance.example: Still creating... [30s elapsed]
aws_instance.example: Still creating... [40s elapsed]
aws_instance.example: Creation complete after 42s [id=i-0cf9d04e3e926b975]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.



5. Check the state of your infrastructure


You can check in your AWS Console > EC2 Dashboard and you will see the instance. To see it from Terraform, run the command below

$ terraform show

# aws_instance.example:
resource "aws_instance" "example" {
    ami                          = "ami-2757f631"
    arn                          = "arn:aws:ec2:us-east-1:775867435088:instance/i-0cf9d04e3e926b975"
    associate_public_ip_address  = true
    availability_zone            = "us-east-1c"
    cpu_core_count               = 1
    cpu_threads_per_core         = 1
    disable_api_termination      = false
    ebs_optimized                = false
    get_password_data            = false
    id                           = "i-0cf9d04e3e926b975"
    instance_state               = "running"
    instance_type                = "t2.micro"
    ipv6_address_count           = 0
    ipv6_addresses               = []
    key_name                     = "us-east-1-keypair"
    monitoring                   = false
    primary_network_interface_id = "eni-0f1f6798c37c9d210"
    private_dns                  = "ip-172-31-80-192.ec2.internal"
    private_ip                   = "172.31.80.192"
    public_dns                   = "ec2-100-24-2-45.compute-1.amazonaws.com"
    public_ip                    = "100.24.2.45"
    security_groups              = [
        "default",
    ]
    source_dest_check            = true
    subnet_id                    = "subnet-e5790ecb"
    tenancy                      = "default"
    volume_tags                  = {}
    vpc_security_group_ids       = [
        "sg-55c2f911",
    ]

    credit_specification {
        cpu_credits = "standard"
    }

    root_block_device {
        delete_on_termination = true
        iops                  = 100
        volume_id             = "vol-0b061617e365e8123"
        volume_size           = 8
        volume_type           = "gp2"
    }
}


6. Destroy the EC2 instance
The beauty of Terraform is that it maintains the state of your infrastructure. You can remove the EC2 instance by running just one simple command

$ terraform destroy

aws_instance.example: Refreshing state… [id=i-0cf9d04e3e926b975]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_instance.example will be destroyed
  - resource "aws_instance" "example" {
      - ami                          = "ami-2757f631" -> null
      - arn                          = "arn:aws:ec2:us-east-1:775867435088:instance/i-0cf9d04e3e926b975" -> null
      - associate_public_ip_address  = true -> null
      - availability_zone            = "us-east-1c" -> null
      - cpu_core_count               = 1 -> null
      - cpu_threads_per_core         = 1 -> null
      - disable_api_termination      = false -> null
      - ebs_optimized                = false -> null
      - get_password_data            = false -> null
      - id                           = "i-0cf9d04e3e926b975" -> null
      - instance_state               = "running" -> null
      - instance_type                = "t2.micro" -> null
      - ipv6_address_count           = 0 -> null
      - ipv6_addresses               = [] -> null
      - key_name                     = "us-east-1-keypair" -> null
      - monitoring                   = false -> null
      - primary_network_interface_id = "eni-0f1f6798c37c9d210" -> null
      - private_dns                  = "ip-172-31-80-192.ec2.internal" -> null
      - private_ip                   = "172.31.80.192" -> null
      - public_dns                   = "ec2-100-24-2-45.compute-1.amazonaws.com" -> null
      - public_ip                    = "100.24.2.45" -> null
      - security_groups              = [
          - "default",
        ] -> null
      - source_dest_check            = true -> null
      - subnet_id                    = "subnet-e5790ecb" -> null
      - tags                         = {} -> null
      - tenancy                      = "default" -> null
      - volume_tags                  = {} -> null
      - vpc_security_group_ids       = [
          - "sg-55c2f911",
        ] -> null

      - credit_specification {
          - cpu_credits = "standard" -> null
        }

      - root_block_device {
          - delete_on_termination = true -> null
          - iops                  = 100 -> null
          - volume_id             = "vol-0b061617e365e8123" -> null
          - volume_size           = 8 -> null
          - volume_type           = "gp2" -> null
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_instance.example: Destroying… [id=i-0cf9d04e3e926b975]
aws_instance.example: Still destroying… [id=i-0cf9d04e3e926b975, 10s elapsed]
aws_instance.example: Still destroying… [id=i-0cf9d04e3e926b975, 20s elapsed]
aws_instance.example: Still destroying… [id=i-0cf9d04e3e926b975, 30s elapsed]
aws_instance.example: Destruction complete after 34s

Destroy complete! Resources: 1 destroyed.

The post Build EC2 Infrastructure using Terraform appeared first on EasyOraDBA.


High level Steps to Migrate from On-Premise 11g(11.2.0.4) to Oracle 18c Autonomous Database using Export/Import

  1. Create an Oracle Autonomous Transaction Processing (ATP) database in your Oracle Cloud account. Create a wallet and download it to the bastion host from where you will run the import command. Make sure you create a folder for the wallet, unzip the wallet zip file and set the TNS_ADMIN parameter

PATH=$PATH:$HOME/.local/bin:$HOME/bin:/u01/app/oracle/18.3.0/bin
ORACLE_HOME=/u01/app/oracle/18.3.0
ORACLE_BASE=/u01/app/oracle
TNS_ADMIN=/home/opc/orawallet
export ORACLE_HOME ORACLE_BASE TNS_ADMIN

export PATH

cd /home/opc/orawallet
unzip Wallet_easyoradba.zip

sqlplus admin/**********@easyoradba_high

  2. Export the schema from the 11g source database using the expdp utility

expdp system@swx directory=export_dir logfile=export.log dumpfile=expswx.dmp schemas=swx parallel=16

  3. Create a bucket in your Oracle Cloud account's Object Storage; let's call it 'easyoradba-migrate'
  4. Upload the DMP file from step 2 (e.g. "expswx.dmp") to your Oracle Cloud bucket
  5. Create a Pre-Authenticated Request to get read access on the object expswx.dmp; this is required to run the import in ATP
  6. Log back in to Oracle Cloud ATP as the Admin user, create a new user called 'SWX' and grant a few privileges to it

create user swx IDENTIFIED by Abcde1234$## ;

grant create session to swx;
grant dwrole to swx;
GRANT UNLIMITED TABLESPACE TO swx;

  7. Create credentials to run the import using the DBMS_CLOUD package

BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'SWX',
username => 'SWX',
password => '*********'
);
END;
/

— If you need to re-create, drop the old one and add it again —
BEGIN
DBMS_CLOUD.drop_credential(credential_name => 'SWX');
END;
/

  8. Run the import using the impdp command

impdp admin@easyoradba_high directory=data_pump_dir credential=swx dumpfile=https://objectstorage.ap-sydney-1.oraclecloud.com/p/YcRteyozj7l7EpFfu6Zr1TWjw7mYeM97JBL96VASXsM/n/sdpxrcjhpsnk/b/easyoradba-migrate/o/exp_swx.dmp parallel=16 remap_tablespace=swx:data encryption_pwd_prompt=yes partition_options=merge transform=segment_attributes:n transform=dwcs_cvt_iots:y exclude=cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link schemas=swx

It will ask for the encryption password which we created in step 7.

The import is now complete and all the objects of the on-premise source schema SWX are now under the SWX schema in the target Oracle ATP database.
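
As an optional, hypothetical check (not part of the original steps), you can confirm the migrated objects from the ATP side:

SQL> select object_type, count(*) from dba_objects where owner = 'SWX' group by object_type order by object_type;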

The post High level Steps to Migrate from On-Premise 11g(11.2.0.4) to Oracle 18c Autonomous Database using Export/Import appeared first on EasyOraDBA.

Connect to Oracle Cloud VM with SSH Public-Private Key Pair


https://docs.cloud.oracle.com/iaas/Content/Compute/Tasks/managingkeypairs.htm
https://docs.cloud.oracle.com/iaas/Content/Compute/Tasks/accessinginstance.htm

  • Generate a local private and public keyfile on the client machine
$ ssh-keygen -o
$ cd .ssh/
$ ls -ltrh
total 12K
-rw-------. 1 opc opc 409 Jan 11 07:05 authorized_keys
-rw-r--r--. 1 opc opc 399 Jan 11 07:19 id_rsa.pub
-rw-------. 1 opc opc 1.8K Jan 11 07:19 id_rsa
$ chmod 400 id_rsa
$ ls -ltrh
total 12K
-rw-------. 1 opc opc 409 Jan 11 07:05 authorized_keys
-rw-r--r--. 1 opc opc 399 Jan 11 07:19 id_rsa.pub
-r--------. 1 opc opc 1.8K Jan 11 07:19 id_rsa
  • Upload the 'id_rsa.pub' file while creating the instance, or paste the contents of the public key file into the SSH keys field
    $ cat id_rsa.pub
  • Connect to the VM from your client machine using the private key file in the .ssh/ directory
ssh -i ~/.ssh/id_rsa opc@10.0.2.3
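
Optionally, you can add an entry to your SSH config so you do not have to pass the key and user every time; a minimal sketch, where the host alias 'oci-vm' is just an assumption and the IP, user and key are taken from the example above:

# ~/.ssh/config
Host oci-vm
    HostName 10.0.2.3
    User opc
    IdentityFile ~/.ssh/id_rsa

After which you can simply run: ssh oci-vm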


The post Connect to Oracle Cloud VM with SSH Public-Private Key Pair appeared first on EasyOraDBA.

Migrate 12.1 DBCS to ADW 19c using Data Pump


Architecture:

Source : DBCS on Classic 12.1 Enterprise Edition

Target : OCI gen2 Autonomous Data Warehouse 19c


Pre-Req:
———
1.  Install and Configure ocicli as ‘root’
 Ref : https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm
 
 $ bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

2. Source DBCS Classic DB provisioned: version 12.1.0.2 Enterprise Edition

3. Target ADW 19c version created

4. SQL developer installed on your local client machine to do the configuration part of ADW

5. Access to DBCS instance to run the export

6. An intermediate instance to run the impdp into Autonomous; you can install a newer client version on the DBCS instance itself, but for production it is better to run the import from another instance


High-Level Steps:

—————–

  1. Export Dump File to Local Filesystem of DBCS Instance
  2. Install and Configure ocicli client on DBCS instance
  3. Run Multipart upload to Migration Object bucket, for larger files we can explore rclone or tsunami udp or even DTS
  4. Install 19c client on intermediary instance to run the import
  5. Copy client credentials file of ADW to intermediary instance and run the impdp to ADW


Detailed Steps:

—————–

1  — Install Oracle 19c Client with Image Method —

Reference : https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=514763738647331&parent=EXTERNAL_SEARCH&sourceId=HOWTO&id=885643.1&_afrWindowMode=0&_adf.ctrl-state=o3w0ut0xp_4

Download 19c client from : https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html

mkdir -p /home/opc/client/oracle/orainventory

mkdir -p /home/opc/client/oracle/19.3.0

unzip LINUX.X64_193000_client_home.zip

cd /home/opc/client/oracle/19.3.0

vim ./inventory/response/client_install.rsp # add the required parameters

chmod 600 ./inventory/response/client_install.rsp

Silent install
[opc@cloudinstance:~/client/oracle/19.3.0]$ ./runInstaller -silent -responseFile /home/opc/client/oracle/19.3.0/inventory/response/client_install.rsp



2 — With oracle user on DBCS —

[oracle@Shadab-Migrate ~]$ sqlplus “/as sysdba”

SQL*Plus: Release 12.1.0.2.0 Production on Fri May 8 04:53:08 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Oracle Label Security and Real Application Testing options

SQL> create or replace directory export_dir as '/home/oracle/migratedump';

Directory created.

SQL> grant read,write on directory export_dir to public;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production

$ expdp dumpfile=migratdumpfile.dmp logfile=migratedumpfile.log directory=export_dir full=y parallel=8



3 — With root user on DBCS —

$ oci os object put --namespace ocicpm -bn Shadab-DB-Migrate --file /home/oracle/migratedump/migratdumpfile.dmp --part-size 1 --parallel-upload-count 8

Upload ID: 919adb9c-d89e-601b-e40e-912d028c95fd
Split file into 3 parts for upload.
Uploading object  [####################################]  100%
{
  "etag": "309a3651-7950-4122-8dce-e56ec9087433",
  "last-modified": "Fri, 08 May 2020 05:12:45 GMT",
  "opc-multipart-md5": "hVrtNHpCbxdMscoSpcquPg==-3"
}

3 Go to your Bucket and create pre-authenticated URL for read for your dump file : https://objectstorage.ap-sydney-1.oraclecloud.com/n/ocicpm/b/Shadab-DB-Migrate/o/migratdumpfile.dmp

copy the pre-authenticated request (PAR) :
https://objectstorage.ap-sydney-1.oraclecloud.com/p/N637dPGDlv0cnwUHGPFwMiNvJhugACKWN8qZKBmCbAU/n/ocicpm/b/Shadab-DB-Migrate/o/migratdumpfile.dmp




4 — Import Configuration for ADW from SQL Developer —

create user migratetest IDENTIFIED by Abcde1234$## ;

grant create session to migratetest;

grant dwrole to migratetest;

GRANT UNLIMITED TABLESPACE TO migratetest;

/* I am running my import as admin, but ideally you should run it with another user like the one created above */

/* Create credentials to run the import using the DBMS_CLOUD package */

begin
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name=> 'ADWCS_CREDS',
username => 'admin',
password => '*********'
);
end;
/
 

 — If you need to re-create, drop the old one and add it again —

 begin
DBMS_CLOUD.DROP_CREDENTIAL(credential_name=> 'ADWCS_CREDS');
end;
/
 




5 — On the instance from Step 1, run the import. Copy your credentials wallet file and store it in the $ORACLE_HOME/wallet directory —

$ mkdir -p  /home/opc/client/oracle/19.3.0/wallet

— Copy the wallet file from your Autonomous Database to the instance —
mv Wallet_PAYGDEV.zip /home/opc/client/oracle/19.3.0/wallet

— Set below parameters in bash profile —

export ORACLE_HOME=/home/opc/client/oracle/19.3.0
export ORACLE_BASE=/home/opc/client/oracle
export TNS_ADMIN=/home/opc/client/oracle/19.3.0/wallet

cd /home/opc/client/oracle/19.3.0/wallet

unzip Wallet_PAYGDEV.zip

$ vim sqlnet.ora

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/home/opc/client/oracle/19.3.0/wallet")))
SSL_SERVER_DN_MATCH=yes

$ tnsping paygdev_high

Used TNSNAMES adapter to resolve the alias
Attempting to contact (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.ap-sydney-1.oraclecloud.com))(connect_data=(service_name=lhxjqheyshrvs27_paygdev_high.atp.oraclecloud.com))(security=(ssl_server_cert_dn=CN=adb.ap-sydney-1.oraclecloud.com,OU=Oracle ADB SYDNEY,O=Oracle Corporation,L=Redwood City,ST=California,C=US)))
OK (60 msec)

$ sqlplus admin/*********@paygdev_high

select * from database_properties where property_name='DEFAULT_CREDENTIAL';

Important : See MOS note before proceeding : https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=518062209629308&parent=EXTERNAL_SEARCH&sourceId=PROBLEM&id=2416814.1&_afrWindowMode=0&_adf.ctrl-state=9i6ti0fxv_4

impdp admin/*********@paygdev_high directory=data_pump_dir credential=ADWCS_CREDS dumpfile=https://objectstorage.ap-sydney-1.oraclecloud.com/p/N637dPGDlv0cnwUHGPFwMiNvJhugACKWN8qZKBmCbAU/n/ocicpm/b/Shadab-DB-Migrate/o/migratdumpfile.dmp parallel=16  encryption_pwd_prompt=yes partition_options=merge transform=segment_attributes:n transform=dwcs_cvt_iots:y exclude=cluster,indextype,materialized_view,materialized_view_log,materialized_zonemap,db_link full=y  

The post Migrate 12.1 DBCS to ADW 19c using Data Pump appeared first on EasyOraDBA.

Connect to Autonomous Database Private Endpoint from On-Premise SQL Developer using SSH Local Port Forwarding



Assumptions:

  1. Your Bastion/Service VM host is "140.x.x.16", using the private key "/Users/shadab/Downloads/Oracle Content/Keys/mydemo_vcn.priv", which is copied and available on the on-premise client machine
  2. Your Autonomous Database private endpoint IP is "10.10.2.11", listening on port 1522
  3. There is connectivity from the Bastion Host "140.x.x.16" to the Autonomous Database "10.10.2.11" over port 1522
  4. Your Autonomous Database wallet zip file is available on the on-premise client machine

Connect :
— From the On-Premise Client Machine Add the Hostname of the Autonomous Database which is in the tnsnames.ora file from your wallet file —

eg:
atpocipaas_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=hogmsumb.adb.ap-sydney-1.oraclecloud.com))(connect_data=(service_name=pjjuahqavguuilt_atpocipaas_low.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adb.ap-sydney-1.oraclecloud.com,OU=Oracle ADB SYDNEY,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

$ sudo vi /etc/hosts
127.0.0.1 hogmsumb.adb.ap-sydney-1.oraclecloud.com localhost

Now create an SSH tunnel with local port forwarding, forwarding local port 1522 to remote host 10.10.2.11 on remote port 1522

$ ssh -fNT -v -L 1522:10.10.2.11:1522 opc@140.x.x.16 -i "/Users/shadab/Downloads/Oracle Content/Keys/mydemo_vcn.priv"

Check with telnet

$ telnet hogmsumb.adb.ap-sydney-1.oraclecloud.com 1522
Trying 127.0.0.1…
debug1: Connection to port 1522 forwarding to 10.10.2.11 port 1522 requested.
debug1: channel 3: new [direct-tcpip]
Connected to hogmsumb.adb.ap-sydney-1.oraclecloud.com.
Escape character is ‘^]’.

Now connect with SQL Developer using the 'Cloud Wallet' connection type, any one of the TNS entries, and the 'ADMIN' user with which you created the Autonomous Database (or any other DB user)

The post Connect to Autonomous Database Private Endpoint from On-Premise SQL Developer using SSH Local Port Forwarding appeared first on EasyOraDBA.

Oracle RDS Performance Tuning Queries

a) database size

select
'===========================================================' || chr(10) ||
'Total Physical Size = ' || round(redolog_size_gb+dbfiles_size_gb+tempfiles_size_gb+archlog_size_gb+ctlfiles_size_gb,2) || ' GB' || chr(10) ||
'===========================================================' || chr(10) ||
' Redo Logs Size : ' || round(redolog_size_gb,3) || ' GB' || chr(10) ||
' Data Files Size : ' || round(dbfiles_size_gb,3) || ' GB' || chr(10) ||
' Temp Files Size : ' || round(tempfiles_size_gb,3) || ' GB' || chr(10) ||
' Archive Log Size : ' || round(archlog_size_gb,3) || ' GB' || chr(10) ||
' Control Files Size : ' || round(ctlfiles_size_gb,3) || ' GB' || chr(10) ||
'===========================================================' || chr(10) ||
'Actual Database Size = ' || db_size_gb || ' GB' || chr(10) ||
'===========================================================' || chr(10) ||
' Used Database Size : ' || used_db_size_gb || ' GB' || chr(10) ||
' Free Database Size : ' || free_db_size_gb || ' GB' as summary
from (
select sys_context('USERENV', 'DB_NAME') db_name
,(select sum(bytes)/1024/1024/1024 redo_size from v$log ) redolog_size_gb
,(select sum(bytes)/1024/1024/1024 data_size from dba_data_files ) dbfiles_size_gb
,(select nvl(sum(bytes),0)/1024/1024/1024 temp_size from dba_temp_files ) tempfiles_size_gb
,(select sum(bytes)/1024/1024/1024 from v$log where sequence# in (select sequence# from v$loghist)) archlog_size_gb
,(select sum(block_size*file_size_blks)/1024/1024/1024 controlfile_size from v$controlfile) ctlfiles_size_gb
,round(sum(used.bytes)/1024/1024/1024,3) db_size_gb
,round(sum(used.bytes)/1024/1024/1024,3) - round(free.f/1024 /1024/ 1024) used_db_size_gb
,round(free.f/1024/1024/1024,3) free_db_size_gb
from (select bytes from v$datafile
union all
select bytes from v$tempfile) used
,(select sum(bytes) as f from dba_free_space) free
group by free.f);

Note: archlog_size_gb is not the physical archived log size; it just reports the size of the online redo logs that have already been archived. The actual physical archived log file size on disk depends on your archivelog retention hours setting, shown below.

SQL> set serveroutput on
SQL> exec rdsadmin.rdsadmin_util.show_configuration;
NAME:archivelog retention hours
VALUE:24
DESCRIPTION:ArchiveLog expiration specifies the duration in hours before archive/redo log files are automatically deleted.
NAME:tracefile retention
VALUE:10080
DESCRIPTION:tracefile expiration specifies the duration in minutes before tracefiles in bdump are automatically deleted.
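
If you need a different retention window, it can be changed with the same rdsadmin package (a sketch based on the documented rdsadmin_util.set_configuration procedure; 48 hours is just an illustrative value):

begin
    rdsadmin.rdsadmin_util.set_configuration(
        name  => 'archivelog retention hours',
        value => 48);
end;
/
commit;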

select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'));

note: filesize in rds_file_util.listdir is in bytes.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Concepts.Oracle.html

SQL> alter session set NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';

Session altered.

SQL> select filename, type, (filesize/1024/1024) filesize, mtime from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by 1;

FILENAME                                                                                         TYPE          FILESIZE MTIME
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ---------- ---------- -------------------
11gexp101.dmp                                                                                         file         30.390625 2015-11-02 16:08:47
11gexp102.dmp                                                                                         file             0 2015-11-02 16:08:51
11gexp201.dmp                                                                                         file        25.2226563 2015-11-02 16:09:02
3 rows selected.

Scenario 0: check what's running on database

SELECT username, seconds_in_wait, machine, port, terminal, program, module, service_name FROM v$session where Status='ACTIVE' AND UserName IS NOT NULL;

 

Scenario 1: Resource Intensive SQL Statements, CPU or Disk

"100% CPU alone not an indication of a problem and can indicate an optimal state"

All virtual memory servers are designed to drive CPU to 100% as soon as possible.

This just means that the CPUs are working to their full potential. The only metric that identifies a CPU bottleneck is when the run queue (r value) exceeds the number of CPUs on the server.

# show cpu usage for active sessions

SET PAUSE ON
SET PAUSE 'Press Return to Continue'
SET PAGESIZE 60
SET LINESIZE 300

COLUMN username FORMAT A30
COLUMN sid FORMAT 999,999,999
COLUMN serial# FORMAT 999,999,999
COLUMN "cpu usage (seconds)"  FORMAT 999,999,999.0000

SELECT
  s.username,
  t.sid,
  s.serial#,
  SUM(VALUE/100) as "cpu usage (seconds)"
FROM
  v$session s,
  v$sesstat t,
  v$statname n
WHERE
  t.STATISTIC# = n.STATISTIC#
AND
  NAME like '%CPU used by this session%'
AND
  t.SID = s.SID
AND
  s.status='ACTIVE'
AND
  s.username is not null
GROUP BY username,t.sid,s.serial#
/

# monitor the near real-time resource consumption of SQL queries sorted by cpu_time

select * from (
select
a.sid session_id
,a.sql_id
,a.status
,a.cpu_time/1000000 cpu_sec
,a.buffer_gets
,a.disk_reads
,b.sql_text sql_text
from v$sql_monitor a
,v$sql b
where a.sql_id = b.sql_id
order by a.cpu_time desc)
where rownum <=20;

# same as above, you can monitor currently executing queries ordered by the number of disk reads

select * from (
select
a.sid session_id
,a.sql_id
,a.status
,a.cpu_time/1000000 cpu_sec
,a.buffer_gets
,a.disk_reads
,substr(b.sql_text,1,15) sql_text
from v$sql_monitor a
,v$sql b
where a.sql_id = b.sql_id
and
a.status='EXECUTING'
order by a.disk_reads desc)
where rownum <=20;

Note: V$SQL_MONITOR view is only available with Oracle Database 11g or higher.

# find the worst queries

select b.username username, a.disk_reads reads, a.executions exec, a.disk_reads /decode (a.executions, 0, 1, a.executions) rds_exec_ratio, a.sql_text Statement from v$sqlarea a, dba_users b where a.parsing_user_id = b.user_id and a.disk_reads > 100000 order by a.disk_reads desc;

Note: the disk_reads columns can be replaced with the buffer_gets column to provide information on SQL statements requiring the largest amount of memory.

# find the worst queries 2 (from AWR history)

select snap_id, disk_reads_delta reads_delta, executions_delta exec_delta, disk_reads_delta /decode (executions_delta, 0, 1, executions_delta) rds_exec_ratio, sql_id from dba_hist_sqlstat where disk_reads_delta > 10000 order by disk_reads_delta desc;

SNAP_ID READS_DELTA EXEC_DELTA RDS_EXEC_RATIO SQL_ID
---------- ----------- ---------- -------------- -------------
      1937    106907        1      106907 b6usrg82hwsa3
      1913     67833        1       67833 b6usrg82hwsa3
      1889     54370        1       54370 b6usrg82hwsa3

# get query text

SQL> select command_type,sql_text from dba_hist_sqltext where sql_id = 'b6usrg82hwsa3';

COMMAND_TYPE
------------
SQL_TEXT
--------------------------------------------------------------------------------
     170
call dbms_stats.gather_database_stats_job_proc (  )

# get explain for the query

SQL> select * from table(dbms_xplan.display_awr('66a40jcr7a4u3'));

# the top 10 CPU intensive queries
select *
from (select a.username, b.sid, a.serial#, a.osuser, a.program, a.sql_id, a.state, value/100 cpu_usage_sec , a.event
from v$session a
,v$sesstat b
,v$statname c
where b.statistic# = c.statistic#
and name like '%CPU used by this session%'
and b.sid = a.sid
order by value desc)
where rownum <=10;

# To aggregate above output
select username, osuser, module, sum(cpu_usage_sec) cpu_usage_sec
from (select a.username, b.sid, a.osuser, a.program, a.sql_id, a.state, value/100 cpu_usage_sec, a.event, a.module
from v$session a
,v$sesstat b
,v$statname c
where b.statistic# = c.statistic#
and name like '%CPU used by this session%'
and b.sid = a.sid)
group by username, osuser, module
order by cpu_usage_sec desc;

#you can display CPU for any Oracle user session with this script
select
   ss.username,
   se.SID,
   VALUE/100 cpu_usage_seconds
from
   v$session ss,
   v$sesstat se,
   v$statname sn
where
   se.STATISTIC# = sn.STATISTIC#
and
   NAME like '%CPU used by this session%'
and
   se.SID = ss.SID
and
   ss.status='ACTIVE'
and
   ss.username is not null
 order by VALUE desc;

USERNAME                  SID CPU_USAGE_SECONDS
------------------------------ ---------- -----------------
SYSMAN                       64          20.17
DBSNMP                       59           3.92
AWS                       43        .16

Scenario 2: Long Running Queries and Open Transactions

# Check Open Transactions, login as sys or system from sqlplus

select * from gv$transaction;

# in past 1 day, run time more than 2000s

set wrap off
col elapsed_time_delta format 9999999999
col plan_hash_value    format 9999999999
col seconds            format 99999
col executions_total   format 99999999

select
   stat.sql_id,
   plan_hash_value,
   rpad(parsing_schema_name,10) "schema",
   elapsed_time_total/1000000 "seconds",
   elapsed_time_delta,
   disk_reads_delta,
   stat.executions_total,
   to_char(ss.end_interval_time,'dd-mm-yy hh24:mi:ss') "endtime",
   rpad(sql_text,140) text,
   ss.snap_id
from
   dba_hist_sqlstat  stat,
   dba_hist_sqltext  txt,
   dba_hist_snapshot ss
where
   stat.sql_id = txt.sql_id
and
   stat.dbid = txt.dbid
and
   ss.dbid = stat.dbid
and
   ss.instance_number = stat.instance_number
and
   stat.snap_id = ss.snap_id
and
   parsing_schema_name not like 'sys%'
and
   ss.begin_interval_time >= sysdate-1
and
   stat.elapsed_time_total/1000000 > 2000
order by
   elapsed_time_total desc;


# Finding long operations (e.g. full table scans). If it is because of lots of short operations, nothing will show up.

COLUMN percent FORMAT 999.99

SELECT sid, to_char(start_time,'hh24:mi:ss') stime,
message,( sofar/totalwork)* 100 percent
FROM v$session_longops
WHERE sofar/totalwork < 1
/

# long query , run more than 60s, active, not background type.
select s.username,s.type,s.sid,s.serial#,s.last_call_et seconds_running,q.sql_text from v$session s
join v$sql q
on s.sql_id = q.sql_id
 where status='ACTIVE'
 and type <> 'BACKGROUND'
 and last_call_et> 60
order by sid,serial#;

---------------
# Queries currently running for more than 60 seconds. Note that it prints multiple lines per running query if the SQL has multiple lines.

select s.username,s.sid,s.serial#,s.last_call_et/60 mins_running,q.sql_text from v$session s
join v$sqltext_with_newlines q
on s.sql_address = q.address
 where status='ACTIVE'
and type <>'BACKGROUND'
and last_call_et> 60
order by sid,serial#,q.piece

# logon within last 4 hours, still active within last half hour

SELECT s.sid, s.serial#, p.spid as "OS PID", s.username, s.module, st.value/100 as "DB Time (sec)"
, stcpu.value/100 as "CPU Time (sec)", round(stcpu.value / st.value * 100,2) as "% CPU"
FROM v$sesstat st, v$statname sn, v$session s, v$sesstat stcpu, v$statname sncpu, v$process p
WHERE sn.name = 'DB time' -- CPU
AND st.statistic# = sn.statistic#
AND st.sid = s.sid
AND  sncpu.name = 'CPU used by this session' -- CPU
AND stcpu.statistic# = sncpu.statistic#
AND stcpu.sid = st.sid
AND s.paddr = p.addr
AND s.last_call_et < 1800 -- active within last 1/2 hour
AND s.logon_time > (SYSDATE - 240/1440) -- sessions logged on within 4 hours
AND st.value/100 > 30 order by st.value;

# ordered by time_waited for sessions logon within last 4 hours, and still active within last half an hour, and event is db file sequential read.

SELECT s.sid, s.serial#, p.spid as "OS PID", s.username, s.module, se.time_waited
FROM v$session_event se, v$session s, v$process p
WHERE se.event = 'db file sequential read'
AND s.last_call_et < 1800 -- active within last 1/2 hour
AND s.logon_time > (SYSDATE - 240/1440) -- sessions logged on within 4 hours
AND se.sid = s.sid
AND s.paddr = p.addr
ORDER BY se.time_waited;
 

Scenario 3:  Locking Sessions

# check which statement is blocking others

select s1.username blkg_user, s1.machine blkg_machine,s1.sid blkg_sid, s1.serial# blkg_serialnum,s1.process blkg_OS_PID,substr(b1.sql_text,1,50) blkg_sql,chr(10),s2.username
wait_user, s2.machine wait_machine,s2.sid wait_sid, s2.serial# wait_serialnum ,s2.process wait_OS_PID ,substr(w1.sql_text,1,50) wait_sql,lo.object_id blkd_obj_id,do.owner obj_own, do.object_name obj_name from v$lock l1,v$session s1,v$lock l2,v$session s2 ,v$locked_object lo,v$sqlarea b1,v$sqlarea w1,dba_objects do
where s1.sid = l1.sid and s2.sid = l2.sid and l1.id1 = l2.id1 and s1.sid = lo.session_id and lo.object_id = do.object_id
and l1.block = 1 and s1.prev_sql_addr = b1.address and s2.sql_address = w1.address and l2.request > 0;

 

# This shows locks. Sometimes things are going slow as it is blocked waiting for a lock:

select process,sid, blocking_session from v$session where blocking_session is not null;

or

select object_name,  object_type,  session_id,  type,   lmode,  request,  block,  ctime  from v$locked_object, all_objects, v$lock
where v$locked_object.object_id = all_objects.object_id AND  v$lock.id1 = all_objects.object_id AND  v$lock.sid = v$locked_object.session_id order by  session_id, ctime desc, object_name;

or

SELECT B.Owner, B.Object_Name, A.Oracle_Username, A.OS_User_Name FROM V$Locked_Object A, All_Objects B WHERE A.Object_ID = B.Object_ID;

Scenario 4:  Detecting undo block changes

• ORA-01555: snapshot too old
• ORA-30036: unable to extend segment by ... in undo tablespace 'UNDOTBS1'

# Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session

SELECT s.sid, s.serial#, s.username, s.program,
 i.block_changes
  FROM v$session s, v$sess_io i
  WHERE s.sid = i.sid and block_changes > 1000
  ORDER BY 5 desc;

Note: this view v$sess_io contains the column BLOCK_CHANGES which indicates how much blocks have been changed by the session. High values indicate a session generating lots of redo.

SELECT s.sid, s.serial#, s.username, s.program,
 t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
 WHERE s.taddr = t.addr
  ORDER BY 5 desc;

Note: V$TRANSACTION. This view contains information about the amount of undo blocks and undo records accessed by the transaction (as found in the USED_UBLK and USED_UREC columns).

#If you want to view the SQL statement associated with a user consuming undo space, then join to V$SQL as shown below
 
select s.sid, s.serial#, s.osuser, s.logon_time, s.status
,s.machine, t.used_ublk
,t.used_ublk*16384/1024/1024 undo_usage_mb
,q.sql_text
from v$session s
,v$transaction t
,v$sql q
where t.addr = s.taddr
and s.sql_id = q.sql_id;

# To identify which users are consuming space in the undo tablespace. Run this query to report on basic information regarding space allocated on a per user basis, including sql statement associated with the user


select
s.sid
,s.serial#
,s.osuser
,s.logon_time
,s.status
,s.machine
,t.used_ublk
,t.used_ublk*16384/1024/1024 undo_usage_mb
,q.sql_text
from v$session s
,v$transaction t
,v$sql q
where t.addr = s.taddr
and s.sql_id = q.sql_id;

# pinpoint which users are responsible for space allocated within the undo tablespace

select
s.sid
,s.serial#
,s.username
,s.program
,r.name undo_name
,rs.status
,rs.rssize/1024/1024 redo_size_mb
,rs.extents
from v$session s
,v$transaction t
,v$rollname r
,v$rollstat rs
where s.taddr = t.addr
and t.xidusn = r.usn
and r.usn = rs.usn;

# The query checks for issues with the undo tablespace that have occurred within the last day

select
to_char(begin_time,'MM-DD-YYYY HH24:MI') begin_time
,ssolderrcnt ora_01555_cnt   -- number of times a "snapshot too old" error occurred
,nospaceerrcnt no_space_cnt  -- number of times space was requested in the undo tablespace but none was found
,txncount max_num_txns
,maxquerylen max_query_len
,expiredblks blck_in_expired
from v$undostat
where begin_time > sysdate - 1
order by begin_time;

Note: if NO_SPACE_CNT reports a non-zero value, you may need to add more space to your undo tablespace.

Scenario 5: Checking Memory Usage

# a quick check of PGA and UGA memory grouping by Session Status, can tell you how much memory is being used by INACTIVE sessions

select status, round(total_user_mem/1024,2) mem_used_in_kb, round(100 * total_user_mem/total_mem,2) mem_percent
from (select b.status, sum(value) total_user_mem
       from sys.v_$statname c
               ,sys.v_$sesstat a
                   ,sys.v_$session b
                   ,sys.v_$bgprocess p
      where a.statistic#=c.statistic#
            and p.paddr (+) = b.paddr
                and b.sid=a.sid
                and c.name in ('session pga memory','session uga memory')
      group by b.status)
   ,(select sum(value) total_mem
      from sys.v_$statname c
              ,sys.v_$sesstat a
     where a.statistic#=c.statistic#
           and c.name in ('session pga memory','session uga memory'))
order by 3 desc;

# Connections count - dedicated server or active sessions

SELECT server, count(*) FROM v$session group by server;

SELECT status, count(*) FROM v$session group by status;

# Dynamic memory usage
SELECT component , current_size, user_specified_size FROM V$MEMORY_DYNAMIC_COMPONENTS WHERE current_size > 0;

Scenario 6: Open Cursors Monitoring

ORA-01000: maximum open cursors exceeded

# check the number of open cursors each session has opened, list the first 20 results

select * from (
select
a.value
,c.username
,c.machine
,c.sid
,c.serial#
from v$sesstat a
,v$statname b
,v$session c
where a.statistic# = b.statistic#
and c.sid = a.sid
and b.name = 'opened cursors current'
and a.value != 0
and c.username IS NOT NULL
order by 1 desc, 2)
where rownum < 21;

Note: run 'show parameter open_cursors;' to see the open_cursors setting. If a session is using something like 1000 cursors, it is probably due to application code not closing open cursors.

Scenario 7: Check SQL that Consuming Temporary Space

# to view the space a session is using in the temporary tablespace

SELECT
s.sid
,s.serial#
,s.username
,p.spid
,s.module
,p.program
,SUM(su.blocks) * tbsp.block_size/1024/1024 mb_used
,su.tablespace
FROM v$sort_usage su
,v$session s
,dba_tablespaces tbsp
,v$process p
WHERE su.session_addr = s.saddr
AND su.tablespace = tbsp.tablespace_name
AND s.paddr = p.addr
GROUP BY
s.sid, s.serial#, s.username, s.osuser, p.spid, s.module,
p.program, tbsp.block_size, su.tablespace
ORDER BY s.sid;


# ORA-1652: unable to extend temp segment by 128 in tablespace

Make sure the temp files are set to autoextend and have no maximum size limit:

0) select * from v$tempfile   # check initial creation size and file path etc

1) select * from dba_temp_free_space;
2) select autoextensible from dba_temp_files;
3) select file_name,maxbytes/1024/1024/1024 GB from dba_temp_files;  # by default, maxbytes should be 32TB.

SQL> select file_name,maxbytes/1024/1024/1024 GB from dba_temp_files;

FILE_NAME                                                                          MAXBYTES/1024/1024/1024
/rdsdbdata/db/ORCL_A/datafile/o1_mf_temp02_btrglkq4_.tmp                                                       10
/rdsdbdata/db/ORCL_A/datafile/o1_mf_temp03_bywq42rz_.tmp                                                    32768

4) SQL> create temporary tablespace temp08 tempfile size 180m autoextend on next 20m maxsize 500m;

Tablespace created 

SQL> set head on
SQL> select * from dba_temp_files where tablespace_name='TEMP08'

FILE_NAME                                                                                             FILE_ID TABLESPACE_NAME             BYTES       BLOCKS STATUS  RELATIVE_FNO AUT   MAXBYTES  MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
/rdsdbdata/db/ORCL_A/datafile/o1_mf_temp08_byxhhbt2_.tmp                                                                   7 TEMP08                 188743680        23040 ONLINE      1024 YES  524288000       64000     2560  187695104       22912

# note:  increment_by : 2560 blocks * 8k/block= 20480k = 20m, 180m is physical OS file size

Scenario 8: AWR/ADDM/ASH/Statspack

If your RDS Oracle instance is Enterprise Edition, you can use AWR/ADDM/ASH; otherwise, you can enable Statspack through an option group.

# How to run AWR/ADDM/ASH for your RDS Oracle EE

There are 3 options to run AWR/ADDM/ASH:

    If you have installed the full Oracle client on your EC2 instance, you can run them after connecting to the RDS Oracle instance with sqlplus, as follows

SQL> @?/rdbms/admin/awrrpt

# check history snapshots for AWR
select snap_id, DBID,INSTANCE_NUMBER,BEGIN_INTERVAL_TIME, END_INTERVAL_TIME from dba_hist_snapshot order by snap_id;

SQL> @?/rdbms/admin/addmrpt

SQL> @?/rdbms/admin/ashrpt

    Using the SQL Developer DBA tool

     Using the OEM Web GUI if OEM is enabled
     Using Statspack - https://megamind.amazon.com/node/3844

Create, View and Delete Snapshots

sqlplus perfstat/perfstat
SQL> exec statspack.snap;
SQL> select name,snap_id,to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS') "Date/Time" from stats$snapshot,v$database;

SQL> @?/rdbms/admin/spreport.sql

 

http://www.akadia.com/services/ora_statspack_survival_guide.html

 
Scenario 9: check if the redo log file size is too small

set lines 120;
set pages 999;

SELECT
   to_char(first_time,'YYYY-MON-DD') day,
   to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
   to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
   to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
   to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
   to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
   to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
   to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
   to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
   to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
   to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
   to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
   to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
   to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
   to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
   to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
   to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
   to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
   to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
   to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
   to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
   to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
   to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
   to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
   to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23"
from
   v$log_history
GROUP by
   to_char(first_time,'YYYY-MON-DD')
order by
   to_char(first_time,'YYYY-MON-DD');
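If the hourly switch counts above are consistently high (a common rule of thumb is to size the logs so you only switch a few times per hour), compare them against the current redo log group sizes. Note that on RDS you cannot run ALTER DATABASE ADD LOGFILE yourself; redo logs are managed with the rdsadmin.rdsadmin_util procedures (add_logfile, drop_logfile) instead.

-- Current redo log groups and their sizes
SELECT group#, thread#, bytes/1024/1024 AS size_mb, members, status
FROM   v$log
ORDER  BY group#;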

The post Oracle RDS Performance Tuning Queries appeared first on EasyOraDBA.

Create Ongoing Replication for Oracle On-Premise to Amazon Redshift using DMS


You have essentially 2 options if you want to replicate data from on-premise Oracle database to Redshift :

Option 1 : Use a AWS service for migrating databases to AWS cloud called DMS and SCT. DMS stands for Database Migration Service and is a simple, cost-effective and easy to use service. There is no need to install any drivers or applications, and it does not require changes to the source database in most cases. You can begin a database migration with just a few clicks in the AWS Management Console.

AWS SCT stands for schema conversion tool. AWS Schema Conversion Tool makes heterogeneous database migrations predictable by automatically converting the source database schema and a majority of the database code objects, including views, stored procedures, and functions, to a format compatible with the target database. (Ref: https://aws.amazon.com/dms/schema-conversion-tool/)

Option 2: Using non-AWS options like Oracle GoldenGate, Attunity, Alooma etc. I will focus on Option 1, the AWS-native Database Migration Service.

To migrate an Oracle database from on-premise and continue the replication, you need to set up the network on the on-premise side as well as on AWS, then provision the DMS and Redshift infrastructure and configure the replication tasks. Let us break the DMS migration into 3 broad stages, with links to the AWS public documentation for each.

====================================================================

Stage 1 : Network Setup for DMS On-Premise to AWS VPC

If your on-premise Oracle database is not publicly available, you will have to use either Direct Connect or a VPN. Remote networks can connect to a VPC using several options such as AWS Direct Connect or a software or hardware VPN. If you don't use a VPN or AWS Direct Connect to connect to AWS resources, you can use the internet to migrate a database to an Amazon Redshift cluster.

As part of the network to use for database migration, you need to specify what subnets in your Amazon Virtual Private Cloud (Amazon VPC) you plan to use. A subnet is a range of IP addresses in your VPC in a given Availability Zone. These subnets can be distributed among the Availability Zones for the AWS Region where your VPC is located.

You create a replication instance in a subnet that you select, and you can manage what subnet a source or target endpoint uses by using the AWS DMS console.

Please see the below links for more information on the Network setup for DMS

Link 1 : https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html#CHAP_ReplicationInstance.VPC.Configurations.ScenarioDirect

Link 2 : https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.html
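As an illustration, the replication subnet group that DMS uses can be created from the AWS CLI once the VPC and subnets exist; the identifier and subnet IDs below are placeholders for your own values.

aws dms create-replication-subnet-group \
    --replication-subnet-group-identifier dms-subnet-group \
    --replication-subnet-group-description "Subnets for the DMS replication instance" \
    --subnet-ids subnet-0aaa1111bbb2222cc subnet-0ddd3333eee4444ff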

====================================================================

Stage 2 : Creation of DMS Replication Instance, Endpoints and Redshift Infrastructure

a) Create Replication Instance
AWS DMS always creates the replication instance in a VPC based on Amazon Virtual Private Cloud (Amazon VPC). You specify the VPC where your replication instance is located. You can use your default VPC for your account and AWS Region, or you can create a new VPC. The VPC must have two subnets in at least one Availability Zone.

Link 3 : https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReplicationInstance.VPC.html
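For reference, the same replication instance can be provisioned from the AWS CLI; the identifier, instance class and storage below are placeholders you would size for your own workload.

aws dms create-replication-instance \
    --replication-instance-identifier oracle-to-redshift-repl \
    --replication-instance-class dms.c5.large \
    --allocated-storage 100 \
    --replication-subnet-group-identifier dms-subnet-group \
    --no-publicly-accessible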

b) Before creating the replication endpoints, ensure that your Redshift cluster is created and that its security groups allow access from your DMS replication instance. If the Redshift cluster is in a different VPC, you will have to set up VPC peering.

Link 4 : https://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-redshift-cluster.html

Link 5 : https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html

c) Create the Endpoints for DMS
An endpoint provides connection, data store type, and location information about your data store. AWS Database Migration Service uses this information to connect to a data store and migrate data from a source endpoint to a target endpoint.

Link 6 : https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Endpoints.html

You will need to create 2 endpoints. 1 source endpoint for Oracle and 1 target endpoint for Redshift.

Link 7 : https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Configuration

Link 8 : https://docs.aws.amazon.com/dms/latest/sbs/CHAP_RDSOracle2Redshift.html
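A hedged CLI sketch of the two endpoints is shown below; host names, credentials and identifiers are placeholders and would normally be stored securely rather than passed on the command line.

# Source endpoint for the on-premise Oracle database
aws dms create-endpoint \
    --endpoint-identifier oracle-onprem-source \
    --endpoint-type source \
    --engine-name oracle \
    --server-name onprem-db.example.com \
    --port 1521 \
    --database-name ORCL \
    --username dms_user \
    --password '********'

# Target endpoint for the Redshift cluster
aws dms create-endpoint \
    --endpoint-identifier redshift-target \
    --endpoint-type target \
    --engine-name redshift \
    --server-name my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com \
    --port 5439 \
    --database-name dev \
    --username awsuser \
    --password '********'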

It is advisable to run AWS SCT and create a project for Oracle to Redshift. It will give you a better idea of the differences between Oracle and Redshift and whether you need to perform any manual steps on the Redshift side before or after the replication. If you are only doing table replication, you can skip the AWS SCT part. My advice would be to do multiple dry runs with a small pre-prod Oracle database first to see if you are missing any objects or features on the Redshift side.

Now once this stage is completed you essentially have the Network, DMS and Redshift infrastructure ready to start the replication from Oracle On-Premise to Redshift.

====================================================================

Stage 3 : Create Replication Task for replicating data in near realtime from On-Premise Oracle to Redshift

This is the final stage of configuration for Oracle to Redshift replication using DMS. In the previous 2 stages we already set up the infrastructure: one DMS replication instance and two endpoints, one for your source on-premise Oracle database and one for the destination Redshift cluster on AWS.

Now the final part is to configure the replication task. Before that, we need to ensure that DMS uses CDC to capture changes on the Oracle side, which is the efficient way to run an ongoing replication. The default CDC method for an Oracle source is LogMiner, so you need to enable supplemental logging on the Oracle side.

Enable Supplemental Logging for Oracle:
Link 9 : Normal – https://docs.oracle.com/database/121/SUTIL/GUID-D2DDD67C-E1CC-45A6-A2A7-198E4C142FA3.htm#SUTIL1583
Link 10 : RDS – https://docs.aws.amazon.com/dms/latest/sbs/CHAP_On-PremOracle2Aurora.Steps.ConfigureOracle.html

SQL> alter database force logging;
Database altered.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
Database altered.
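Before moving on, it is worth confirming both settings on the source; a quick check:

-- Verify force logging and supplemental logging on the source database
SELECT force_logging, supplemental_log_data_min, supplemental_log_data_all
FROM   v$database;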

User Account Privileges Required on a Self-Managed Oracle Source for AWS DMS :
GRANT SELECT ANY TRANSACTION to dms_user
GRANT SELECT on V_$ARCHIVED_LOG to dms_user
GRANT SELECT on V_$LOG to dms_user
GRANT SELECT on V_$LOGFILE to dms_user
GRANT SELECT on V_$DATABASE to dms_user
GRANT SELECT on V_$THREAD to dms_user
GRANT SELECT on V_$PARAMETER to dms_user
GRANT SELECT on V_$NLS_PARAMETERS to dms_user
GRANT SELECT on V_$TIMEZONE_NAMES to dms_user
GRANT SELECT on V_$TRANSACTION to dms_user
GRANT SELECT on ALL_INDEXES to dms_user
GRANT SELECT on ALL_OBJECTS to dms_user
GRANT SELECT on DBA_OBJECTS to dms_user (required if the Oracle version is earlier than 11.2.0.3)
GRANT SELECT on ALL_TABLES to dms_user
GRANT SELECT on ALL_USERS to dms_user
GRANT SELECT on ALL_CATALOG to dms_user
GRANT SELECT on ALL_CONSTRAINTS to dms_user
GRANT SELECT on ALL_CONS_COLUMNS to dms_user
GRANT SELECT on ALL_TAB_COLS to dms_user
GRANT SELECT on ALL_IND_COLUMNS to dms_user
GRANT SELECT on ALL_LOG_GROUPS to dms_user
GRANT SELECT on SYS.DBA_REGISTRY to dms_user
GRANT SELECT on SYS.OBJ$ to dms_user
GRANT SELECT on DBA_TABLESPACES to dms_user
GRANT SELECT on ALL_TAB_PARTITIONS to dms_user
GRANT SELECT on ALL_ENCRYPTED_COLUMNS to dms_user
GRANT SELECT on V_$LOGMNR_LOGS to dms_user
GRANT SELECT on V_$LOGMNR_CONTENTS to dms_user
GRANT SELECT on V_$STANDBY_LOG to dms_user

The following permission is required when using CDC so that AWS DMS can use Oracle LogMiner to read the redo logs on both 11g and 12c.

GRANT EXECUTE ON dbms_logmnr TO dms_user;

Now we can go ahead and create the replication tasks from Source Oracle to Destination Redshift. Please check the attached link and screenshots for the configuration to be used in this setup.

Link 11 : https://docs.aws.amazon.com/dms/latest/sbs/CHAP_RDSOracle2Redshift.Steps.CreateMigrationTask.html

/* Ensure you select the option 'Migrate existing data and replicate ongoing changes' */
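Selecting 'Migrate existing data and replicate ongoing changes' in the console corresponds to the full-load-and-cdc migration type. A CLI sketch of the same task is below; the ARNs are placeholders you would copy from your own replication instance and endpoints, and table-mappings.json holds your selection rules.

aws dms create-replication-task \
    --replication-task-identifier oracle-to-redshift-cdc \
    --source-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:SOURCEEXAMPLE \
    --target-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:TARGETEXAMPLE \
    --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:INSTANCEEXAMPLE \
    --migration-type full-load-and-cdc \
    --table-mappings file://table-mappings.json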

The initial load will take some time if the Oracle dataset you are replicating is large, and it depends on a lot of factors, e.g. network bandwidth, how busy the source Oracle database is, and the CPU and IOPS of your source and destination hardware. As I suggested earlier, before doing a production migration/replication do multiple dry runs so you have the timing narrowed down.

Finally, validate the task and compare its results with the expected results.

Link 12 : https://docs.aws.amazon.com/dms/latest/sbs/CHAP_RDSOracle2Redshift.Steps.VerifyDataMigration.html

Once you have completed the above setup, you will have a CDC (LogMiner-based) ongoing replication from your on-premise Oracle database to a Redshift cluster. Please refer to these blog articles for the high-level steps needed to replicate Oracle to Amazon Redshift.

Link 13 : https://aws.amazon.com/getting-started/projects/migrate-oracle-to-amazon-redshift/
Link 14 : https://aws.amazon.com/blogs/database/how-to-migrate-your-oracle-data-warehouse-to-amazon-redshift-using-aws-sct-and-aws-dms/
Link 15 (IMPORTANT!) : https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Configuration

A few precautions we can recommend for this Oracle-to-Redshift scenario:

  1. For migrations with a high volume of changes, LogMiner might have some I/O or CPU impact on the computer hosting the Oracle source database. Binary Reader has less chance of having I/O or CPU impact because the archive logs are copied to the replication instance and mined there. Check this link to learn more about the different reader modes for Oracle : https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.Oracle.html#CHAP_Source.Oracle.Configuration
  2. Always monitor your source and destination servers continuously for performance issues. Oracle in particular is largely used for transactional systems, so a drop in database performance can impact your application.
  3. Primary key constraints are not enforced in Redshift: a primary key can be declared in the DDL, but Redshift only keeps the definition in the data dictionary. It is possible to have duplicate rows even if a table's DDL defines a primary key (see the duplicate check sketch after this list).
  4. Redshift is at its core an OLAP database/data warehouse, unlike Oracle which can do both OLAP and OLTP. Write operations in Redshift are fundamentally slower than read operations. For the initial load from Oracle to Redshift, DMS uses COPY commands, but any subsequent updates/inserts are applied as DML on Redshift, and small atomic inserts can be expensive there.

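Because of point 3, a periodic duplicate check on critical replicated tables is a cheap safeguard on the Redshift side; the schema, table and key column below are placeholders.

-- Find duplicate primary-key values in a replicated table
SELECT id, COUNT(*) AS occurrences
FROM   myschema.accounts
GROUP  BY id
HAVING COUNT(*) > 1;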
The post Create Ongoing Replication for Oracle On-Premise to Amazon Redshift using DMS appeared first on EasyOraDBA.

Replicate Schema’s Between 2 Autonomous Databases in Different Regions on OCI using Oracle GoldenGate 19c Marketplace


Oracle Autonomous Database now supports capturing data with an Extract and applying it to another Autonomous Database in another region. Previously, with GoldenGate you could only replicate to an Autonomous Database as a downstream database, but with this recently released feature you can capture transactions at the source and replicate them to another Autonomous Database, so Autonomous Databases can now act as upstream databases. This is useful in scenarios where you need to build geographically distributed apps using Autonomous Database.

Steps

1. Provision 2 Transaction Processing Autonomous Databases one in Sydney and another in Ashburn

Australia East (Sydney) – Source : ProjectSYD
US East (Ashburn) – Target : ProjectDR

2. Provision GoldenGate 19c Marketplace (Microservices edition) for Oracle and select the deployment as Source and Target 19c; do not select the Deployment 2 Autonomous option

See :  https://www.youtube.com/watch?v=dQbcrH8wVDs

3. Login to Golden Gate Compute VM, and get the Credentials

$ cat ~/ogg-credentials.json
{"username": "oggadmin", "credential": "**********"}

Log in to https://<public-ip> with the credentials displayed above. Under GoldenGate deployments > ServiceManager you can check the GoldenGate Config Home parameter, /u02/deployments/ServiceManager/etc/conf

4. Check the Deployment config file which displays the Source and Target directory structure

$ cat /u02/deployments/ServiceManager/etc/conf/deploymentRegistry.dat

"Source": {
  "environment": [
    {
      "name": "TNS_ADMIN",
      "value": "/u02/deployments/Source/etc"
    }
  ]
}

"Target": {
  "environment": [
    {
      "name": "TNS_ADMIN",
      "value": "/u02/deployments/Target/etc"
    }
  ]
}


5. Copy Wallets from Both Source and Target Autonomous Database to TNS_ADMIN location directory as displayed in above command

-rw-r--r--@ 1 shadab staff 20K 7 Jan 13:02 Wallet_ProjectSYD.zip
-rw-r--r--@ 1 shadab staff 20K 7 Jan 13:26 Wallet_ProjectDR.zip

$ sftp -i “mydemo_vcn.priv” opc@<public-ip-of-GoldenGateVM>

sftp> put Wallet_ProjectSYD.zip
Uploading Wallet_ProjectSYD.zip to /home/opc/Wallet_ProjectSYD.zip
Wallet_ProjectSYD.zip 100% 20KB 1.2MB/s 00:00

sftp> put Wallet_ProjectDR.zip
Uploading Wallet_ProjectDR.zip to /home/opc/Wallet_ProjectDR.zip
Wallet_ProjectDR.zip

6. Unzip the wallets into the Source and Target directories and change the WALLET_LOCATION parameter in each sqlnet.ora file to point to the respective TNS_ADMIN directory

$ cp -p Wallet_ProjectSYD.zip /u02/deployments/Source/etc
$ cp -p Wallet_ProjectDR.zip /u02/deployments/Target/etc

$ cd /u02/deployments/Source/etc
$ unzip Wallet_ProjectSYD.zip
Archive: Wallet_ProjectSYD.zip
inflating: README
inflating: cwallet.sso
inflating: tnsnames.ora
inflating: truststore.jks
inflating: ojdbc.properties
inflating: sqlnet.ora
inflating: ewallet.p12
inflating: keystore.jks

$ cd /u02/deployments/Target/etc
$ unzip Wallet_ProjectDR.zip
Archive: Wallet_ProjectDR.zip
inflating: README
inflating: cwallet.sso
inflating: tnsnames.ora
inflating: truststore.jks
inflating: ojdbc.properties
inflating: sqlnet.ora
inflating: ewallet.p12
inflating: keystore.jks

eg: Source
vi /u02/deployments/Source/etc/sqlnet.ora

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/u02/deployments/Source/etc")))
SSL_SERVER_DN_MATCH=yes

Target
vi /u02/deployments/Target/etc/sqlnet.ora

WALLET_LOCATION = (SOURCE = (METHOD = file) (METHOD_DATA = (DIRECTORY="/u02/deployments/Target/etc")))
SSL_SERVER_DN_MATCH=yes

7. — Source Setup —

a. Create Schema and Table which needs to be replicated
$ cd /u02/deployments/Source/etc

$ /u01/app/client/oracle19/bin/sql /nolog

set cloudconfig /u02/deployments/Source/etc/Wallet_ProjectSYD.zip
show tns

connect admin/Rabbithole321#@projectsyd_high

create user goldengateusr identified by PassW0rd_#21 default tablespace DATA quota unlimited on DATA;
create table goldengateusr.accounts (id number primary key, name varchar2(100));
insert into goldengateusr.accounts values (1,'Shadab');
commit;
select * from goldengateusr.accounts;

b. Unlock ggadmin user and enable supplemental log data

alter user ggadmin identified by PassW0rd_#21 account unlock;
ALTER PLUGGABLE DATABASE ADD SUPPLEMENTAL LOG DATA;
select minimal from dba_supplemental_logging;

select to_char(current_scn) from v$database;
16767325762804

c. Create extract parameter file

$ mkdir /u02/trails/dirdat
$ vi /u02/deployments/Source/etc/conf/ogg/ext1.prm

EXTRACT ext1
USERID ggadmin@projectsyd_high, PASSWORD PassW0rd_#21
EXTTRAIL ./dirdat/sy
ddl include mapped
TABLE goldengateusr.*;

d. Add the extract to source
$ /u01/app/ogg/oracle19/bin/adminclient

CONNECT https://localhost/ deployment Source as oggadmin password DFA9zOjlh0GY%GpI !

ALTER CREDENTIALSTORE ADD USER ggadmin@projectsyd_high PASSWORD PassW0rd_#21 alias projectsyd_high

DBLOGIN USERIDALIAS projectsyd_high

ADD EXTRACT ext1, INTEGRATED TRANLOG, SCN 16767325762804
REGISTER EXTRACT ext1 DATABASE
ADD EXTTRAIL ./dirdat/sy, EXTRACT ext1

START EXTRACT ext1
INFO EXTRACT ext1, DETAIL

The status should be 'running'.

e. Insert rows in source table

/* Insert more rows in the source table */
insert into goldengateusr.accounts values (2,'John Doe');
insert into goldengateusr.accounts values (3,'Mary Jane');
commit;

f. Take a datapump backup of the schema until the SCN to the internal directory ‘DATA_PUMP_DIR’

export ORACLE_HOME='/u01/app/client/oracle19'
export TNS_ADMIN='/u02/deployments/Source/etc'

$ /u01/app/client/oracle19/bin/expdp ADMIN/Rabbithole321#@projectsyd_high directory=DATA_PUMP_DIR dumpfile=export01.dmp logfile=export.log schemas=goldengateusr FLASHBACK_SCN=16767325762804

g. Create Bucket, Auth Token for access and DBMS_CLOUD credentials to copy export backup to Customer bucket
Create a bucket in your tenancy called 'datapump' and create an Auth Token for your OCI user which has read/write permissions to this bucket

$ /u01/app/client/oracle19/bin/sqlplus admin/Rabbithole321#@projectsyd_high

BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'LOAD_DATA',
username => 'oracleidentitycloudservice/shadab.mohammad@oracle.com',
password => 'CR+R1#;4o5M[HJPgsn);'
);
END;
/

-- BEGIN
-- DBMS_CLOUD.drop_credential(credential_name => 'LOAD_DATA');
-- END;
-- /

BEGIN
DBMS_CLOUD.PUT_OBJECT('LOAD_DATA','https://objectstorage.ap-sydney-1.oraclecloud.com/n/ocicpm/b/datapump/','DATA_PUMP_DIR','export01.dmp');
END;
/

select object_name, bytes from dbms_cloud.list_objects('LOAD_DATA','https://objectstorage.ap-sydney-1.oraclecloud.com/n/ocicpm/b/datapump/');

8. — Target Setup —

a. Create DBMS_CLOUD credential on target
cd /u02/deployments/Target/etc/

export ORACLE_HOME='/u01/app/client/oracle19'
export TNS_ADMIN='/u02/deployments/Target/etc/'

$ /u01/app/client/oracle19/bin/sqlplus admin/Rabbithole321#@projectdr_high

BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL(
credential_name => 'LOAD_DATA',
username => 'oracleidentitycloudservice/shadab.mohammad@oracle.com',
password => 'CR+R1#;4o5M[HJPgsn);'
);
END;
/

select object_name, bytes from dbms_cloud.list_objects('LOAD_DATA','https://objectstorage.ap-sydney-1.oraclecloud.com/n/ocicpm/b/datapump/');

b. Unlock ggadmin user on target and enable supplemental log data

alter user ggadmin identified by PassW0rd_#21 account unlock;
ALTER PLUGGABLE DATABASE ADD SUPPLEMENTAL LOG DATA;
select minimal from dba_supplemental_logging;

c. Import the datapump backup from customer bucket to Target ADB

$ /u01/app/client/oracle19/bin/impdp admin/Rabbithole321#@projectdr_high credential=LOAD_DATA schemas=goldengateusr directory=DATA_PUMP_DIR dumpfile=https://objectstorage.ap-sydney-1.oraclecloud.com/n/ocicpm/b/datapump/o/export01.dmp logfile=import.log

d. Create replicat parameter file

$ vi /u02/deployments/Target/etc/conf/ogg/repl1.prm

Replicat repl1
USERID ggadmin@projectdr_high, PASSWORD PassW0rd_#21
map goldengateusr.*, target goldengateusr.*;

e. Create replicat in Target ADB

$ /u01/app/ogg/oracle19/bin/adminclient

CONNECT https://localhost deployment Target as oggadmin password DFA9zOjlh0GY%GpI !
ALTER CREDENTIALSTORE ADD USER ggadmin@projectdr_high PASSWORD PassW0rd_#21 alias projectdr_high
DBLOGIN USERIDALIAS projectdr_high

ADD CHECKPOINTTABLE ggadmin.chkpt
Add Replicat repl1 exttrail ./dirdat/sy CHECKPOINTTABLE ggadmin.chkpt

Start Replicat repl1
info replicat repl1, DETAIL

Status should be 'running'.
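From the same Admin Client session you can also keep an eye on the processes and their statistics once rows start flowing; a minimal sketch, assuming you are still connected to the Target deployment:

INFO ALL
STATS REPLICAT repl1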

9. Now that the replication has started, insert a few records in the source table and you should be able to see them in the target DB. Review /u02/deployments/Target/var/log/ggserr.log for any errors related to the replication

-- Source --
export ORACLE_HOME='/u01/app/client/oracle19'
export TNS_ADMIN='/u02/deployments/Source/etc/'

$ /u01/app/client/oracle19/bin/sqlplus admin/Rabbithole321#@projectsyd_high

select * from goldengateusr.accounts;

insert into goldengateusr.accounts values (4,'Foo Bar');
insert into goldengateusr.accounts values (5,'Dummy Value');
commit;

-- Target --
export ORACLE_HOME='/u01/app/client/oracle19'
export TNS_ADMIN='/u02/deployments/Target/etc/'

$ /u01/app/client/oracle19/bin/sqlplus admin/Rabbithole321#@projectdr_high

select * from goldengateusr.accounts;

We should now be able to see the new records in the Target DR Database.

10. Since we have included the DDL in the Extract, we can also create a table in Source and it will be auto-magically replicated to the Target

-- Source --
create table goldengateusr.cardholder (id number primary key, cardno varchar2(30));

insert into goldengateusr.cardholder values (1,'1234-5677-9876-8765');
commit;

-- Target --
desc goldengateusr.cardholder;
select * from goldengateusr.cardholder;

References:
————
[1] https://blogs.oracle.com/dataintegration/free-goldengate-software-on-oci-marketplace
[2] https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/tutorial-getting-started-autonomous-db/index.html
[3] https://docs.oracle.com/en/middleware/goldengate/core/19.1/oracle-db/configure-autonomous-database-capture-replication.html#GUID-6AF0D1AC-FA05-41E8-ADA2-2F6820C68D5C
[4] https://docs.oracle.com/en/middleware/goldengate/core/19.1/oracle-db/using-ogg-autonomous-databases.html#GUID-660E754E-B9A6-48DD-AA66-0D6B66A022CD

The post Replicate Schema’s Between 2 Autonomous Databases in Different Regions on OCI using Oracle GoldenGate 19c Marketplace appeared first on EasyOraDBA.


Install Statspack for Performance Tuning on OCI VMDB Standard Edition Databases


-- Go to the directory location of the Statspack install scripts on the VMDB host --
cd /u01/app/oracle/product/12.2.0.1/dbhome_1/rdbms/admin

-- From cdb$root --

sqlplus “/as sysdba”

@spcreate.sql

Enter value for perfstat_password: P@ssword1234#_

-- Check the level of statistics --
SELECT * FROM stats$level_description ORDER BY snap_level;

-- Gather the stats of the PERFSTAT schema before we begin --
exec dbms_stats.gather_schema_stats('PERFSTAT');

-- Connect as the PERFSTAT user and generate a sample snapshot --

sqlplus perfstat/Statspack1234#_

SQL> exec statspack.snap;

-- Create the Statspack auto job (it creates a snapshot every hour) --

SQL> @?/rdbms/admin/spauto.sql

-- Verify the jobs --

SQL> alter session set nls_date_format='dd/mm/yyyy hh24:mi:ss';

SQL> select job, what, LAST_DATE, NEXT_DATE, TOTAL_TIME, BROKEN, FAILURES from dba_jobs where SCHEMA_USER='PERFSTAT';

Take another snapshot to get the two snapshots to generate the statspack report:

SQL> exec statspack.snap;

-- Check the snapshots in the system view stats$snapshot --

SQL> select name, snap_id, to_char(snap_time, 'DD/MM/YYYY HH24:MI:SS') "Snapshot Time" from stats$snapshot,v$database;

NAME SNAP_ID Snapshot Time

ABC 3 08/02/2021 17:00:18
ABC 1 08/02/2021 16:52:55
ABC 2 08/02/2021 16:54:07

-- Generate the report between the created snapshots by running the script --

SQL> @?/rdbms/admin/spreport.sql
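With the hourly job in place, snapshots will keep accumulating, so it is worth knowing the companion housekeeping scripts Oracle ships alongside spreport.sql; run them from the same location when needed.

-- Purge a range of old snapshots (prompts for the low and high snap ids)
SQL> @?/rdbms/admin/sppurge.sql

-- Remove Statspack completely if it is no longer required (run as SYSDBA)
SQL> @?/rdbms/admin/spdrop.sql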

The post Install Statspack for Performance Tuning on OCI VMDB Standard Edition Databases appeared first on EasyOraDBA.

Accessing OCI Buckets from AWS Python SDK


1. Create a Customer Secret Key from your OCI user settings. The S3 compatibility key is now called a "customer secret key"

Link : https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Tasks/managingcredentials.htm#Working2

2. Install the Python packages -> oci, awscli, boto3 (boto3 is Amazon's Python SDK)

3. Add the secret key credentials to a .py file using the code below. This will list your OCI buckets and upload a sample file from /tmp on your local instance (VM) to an OCI bucket. In my case the bucket is called "Shadab-DB-Migrate"

import boto3
import oci

config = oci.config.from_file("~/.oci/config")

s3 = boto3.resource(
    's3',
    region_name="ap-sydney-1",
    aws_secret_access_key="**************",
    aws_access_key_id="****************************************",
    endpoint_url="https://ocicpm.compat.objectstorage.ap-sydney-1.oraclecloud.com"
)

# Print out bucket names
for bucket in s3.buckets.all():
    print(bucket.name)

# Upload a file to your OCI bucket; the 2nd value is your bucket name
s3.meta.client.upload_file('/tmp/hello.txt', 'Shadab-DB-Migrate', 'hello.txt')
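Going the other way, the same boto3 resource can list what is in the bucket and pull an object back down; this sketch reuses the bucket and file from the upload above, with an arbitrary local target path.

# List the objects in the OCI bucket
for obj in s3.Bucket('Shadab-DB-Migrate').objects.all():
    print(obj.key, obj.size)

# Download an object back to the local filesystem
s3.meta.client.download_file('Shadab-DB-Migrate', 'hello.txt', '/tmp/hello_downloaded.txt')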


Reference:
——————-
[1] https://docs.cloud.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm
[2] https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Tasks/managingcredentials.htm#Working2

The post Accessing OCI Buckets from AWS Python SDK appeared first on EasyOraDBA.

HTTP 404 Not Found The Request Could Not Be Mapped To Any Database After ORDS_PUBLIC_USER Expired


Error : The request could not be mapped to any database. Check the request URL is correct, and that URL to database mappings have been correctly configured

Reason : This error usually occurs when your ORDS_PUBLIC_USER password has expired; after you reset the password you need to add it to the apex.xml and apex_pu.xml files again

Oracle Support Doc ID : https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=86184332802078&parent=EXTERNAL_SEARCH&sourceId=PROBLEM&id=2572161.1&_afrWindowMode=0&_adf.ctrl-state=d16blnlau_274

Solution :

  1. Reset ORDS_PUBLIC_USER in the CDB: alter user ords_public_user identified by Password1234#_ account unlock;
  2. Uninstall ORDS : java -jar ords.war uninstall
  3. Go to the ORDS directory and then ords/conf, and for each connection pool XML file edit the db.password field with the new password for ords_public_user. Make sure to precede the password with ! to enforce encryption after saving it.

cat apex_pu.xml

<entry key="db.password">@05F48C12F1881222040B8C395A131310F5F7E80E27923EE0C2</entry>

Change the db.password entry to ! followed by the new password and save the file

<entry key="db.password">!Password1234#_</entry>

  4. Reinstall ORDS

java -jar ords.war install
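If the 404 persists after the reinstall, re-running the ORDS validation against the database from the same directory can confirm that the pool configuration and the ORDS_PUBLIC_USER connection are healthy; a minimal check with the same ords.war:

java -jar ords.war validate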

The post HTTP 404 Not Found The Request Could Not Be Mapped To Any Database After ORDS_PUBLIC_USER Expired appeared first on EasyOraDBA.

Shell Script to Keep Oracle Always Free Autonomous Database Alive with SQLCL


After searching the internet long and hard I couldn't find a quick shell script to access Oracle Autonomous Database using SQLcl. This is important for me to keep my Always Free DB instances alive. If you do not make an HTTPS or SQL*Net connection to your Autonomous Always Free DB, it shuts down after 7 days, and after 90 continuous days of inactivity it is deleted.

The same does not apply to the Always Free Linux instances, so what I did was download the instance wallets onto my Always Free OL7 instance and put this shell script in crontab to run every 3 hours to ensure my instances are never shut down. It is a simple hack/solution, and here it goes:

  1. Download Instance Autonomous Wallet onto your Linux Instance
  2. Download and Install SQLCL
  3. Save this script as a shell script file 'Load_ADB.sh'
#!/bin/bash
username='admin'
password='YourP@ssw0rd$#'
password2='YourP@ssw0rd33$#'
sqlquery='select sysdate from dual;'
/home/opc/sqlcl/bin/sql /nolog <<-EOF
set cloudconfig /home/opc/Wallet_primepay.zip
show tns
connect $username/$password@primepay_high
$sqlquery
EOF
/home/opc/sqlcl/bin/sql /nolog <<-EOF
set cloudconfig /home/opc/Wallet_Online.zip
show tns
connect $username/$password2@online_high
$sqlquery
EOF

4. Schedule it in crontab (the leading 0 runs the script once at the top of every third hour)

0 */3 * * * bash /home/opc/Load_ADB.sh >> /tmp/adbquery.log

That's it, a simple hack to keep your Always Free ADB instances alive indefinitely 🙂

The post Shell Script to Keep Oracle Always Free Autonomous Database Alive with SQLCL appeared first on EasyOraDBA.

Migrate ORDS & Apex from OCI-C to OCI VMDB


In OCI-C, ORDS and Apex are installed by default, but in OCI, to keep things lightweight, ORDS and Apex are not bundled on VMDB systems. If you are migrating a DB from OCI Classic to OCI, you need to migrate Apex and ORDS manually. In this tutorial we will migrate both components after the database itself has been migrated.


Steps in OCI-C (Source)

  1. Log in with the oracle user and tar the binaries for ORDS and Apex. In OCI-C, ORDS and Apex reside in the path '/u01/app/oracle/product'

$ sudo su oracle

$ cd /u01/app/oracle/product

$ tar cvf apex_prod.gz apex/

$ tar cvf ords_prod.gz ords/

  2. FTP the files out of OCI-C and copy them to the target OCI Gen2 instance
  3. Make a physical clone of the DB using a tool like ZDM or manually with RMAN

Steps IN OCI (Target)

$ sudo chown oracle:oinstall *.gz

  1. Untar the files to the directory '/u01/app/oracle/product/'

$ sudo tar xvf ords_prod.gz -C /u01/app/oracle/product/

$ sudo tar xvf apex_prod.gz -C /u01/app/oracle/product/

  2. Configure ORDS and Apex for the new OCI host
  • Check for the hostname and service name entries in the configuration files. We have to change these as the target hostname and service name are different in OCI Gen2:

$ cd /u01/app/oracle/product/ords

$ grep -Erni 'PROD.1111111.oraclecloud.internal'

$ grep -Erni 'classichost.compute-1111111.oraclecloud.internal'

Change the strings to new OCI hostname and service name in the files listed above (You can leave out the log files)

  3. Check if Java is installed; if not, install the JDK

$ java -version

  4. Create self-signed SSL certificates for the new host

$ hostname

ocigen2.sub111111111.ivl.oraclevcn.com

$ cd /u01/app/oracle/product/ords/conf/ords/standalone/

$ mkdir certs

$ cd certs

-- Create a self-signed certificate with the openssl utility using the above hostname as the CN; modify the other attributes according to your needs

$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -subj '/CN=ocigen2.sub111111111.ivl.oraclevcn.com'

This will create 2 files key.pem and cert.pem

-- Convert key.pem to its DER form as below:
$ openssl pkcs8 -topk8 -inform PEM -outform DER -in key.pem -out key.der -nocrypt

-- Similarly convert cert.pem to its CRT form:
$ openssl x509 -outform der -in cert.pem -out cert.crt

-- Edit the standalone.properties file, add the new cert and key files, and restart ORDS

$ vi /u01/app/oracle/product/ords/conf/ords/standalone/standalone.properties

ssl.cert=/u01/app/oracle/product/ords/conf/ords/standalone/certs/cert.crt
ssl.cert.key=/u01/app/oracle/product/ords/conf/ords/standalone/certs/key.der
standalone.context.path=/ords
standalone.static.context.path=/i
standalone.static.do.not.prompt=true
standalone.scheme.do.not.prompt=true
jetty.port=8080
jetty.secure.port=8181
ssl.host=ocigen2.sub111111111.ivl.oraclevcn.com
standalone.doc.root=/u01/app/oracle/product/ords/conf/ords/standalone/doc_root

Save and exit

  5. Start ORDS

-- Stop ORDS if it is already running --
$ ps -ef | grep ords.war

$ kill -9 <pid>

$ cd /u01/app/oracle/product/ords

$ java -jar ords.war

-- Put it in the background --
ctrl-z

$ bg
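If you prefer ORDS to keep running after you log out, you can start it detached from the terminal instead of suspending and backgrounding it by hand; a simple sketch, with an arbitrary log path:

$ cd /u01/app/oracle/product/ords
$ nohup java -jar ords.war > /u01/app/oracle/product/ords/ords_standalone.log 2>&1 &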

  6. Install firewalld, allow the ports, and create a port forward from 443 to the ORDS port 8181

-- Enable YUM on the OCI VMDB node, as it is not enabled by default --
$ curl -s http://169.254.169.254/opc/v1/instance/ | grep region
$ wget https://swiftobjectstorage.ap-sydney-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/oci_dbaas_ol7repo -O /tmp/oci_dbaas_ol7repo
$ wget https://swiftobjectstorage.ap-sydney-1.oraclecloud.com/v1/dbaaspatchstore/DBaaSOSPatches/versionlock_ol7.list -O /tmp/versionlock.list
$ sudo cp /tmp/oci_dbaas_ol7repo /etc/yum.repos.d/ol7.repo
$ sudo cp /etc/yum/pluginconf.d/versionlock.list /etc/yum/pluginconf.d/versionlock.list-$(date +%Y%m%d)
$ sudo cp /tmp/versionlock.list /etc/yum/pluginconf.d/versionlock.list
$ sudo yum update

-- Install firewalld and create port rules --
$ sudo yum install firewalld
$ sudo firewall-cmd --zone=public --list-all
$ sudo systemctl start firewalld
$ sudo firewall-cmd --zone=public --list-all
$ sudo firewall-cmd --zone=public --add-port 443/tcp
$ sudo firewall-cmd --zone=public --add-port 8181/tcp
$ sudo firewall-cmd --add-forward-port=port=443:proto=tcp:toport=8181
$ sudo firewall-cmd --runtime-to-permanent
$ sudo systemctl restart firewalld
$ sudo firewall-cmd --zone=public --list-all

  7. Allow port 443 in the Security List or NSG of your VCN subnet and in the NSG attached to the DB

If you have executed all the steps correctly and everything was OK, you should be able to access your Apex URL using the new DB host IP, or the hostname in case you are using a custom domain name.

Since we installed the ORDS schema in the CDB, the URL will be of the form https://<new-db-host-or-ip>/ords/

The post Migrate ORDS & Apex from OCI-C to OCI VMDB appeared first on EasyOraDBA.
