
Productivity Hack: Bash Shell Script to Show Current Time in Different Timezones


I regularly work with colleagues and customers based in multiple time zones. Referencing time-converter websites or doing mental math gymnastics to convert times is not my thing, so I ended up creating a simple shell script for my commonly used timezones. You can add your own timezones, and add an alias so you can call it from the terminal more easily.

The script is tested on Linux and macOS. It takes the local time and prints it in the timezones of your choice.

#!/bin/bash
# Prints the local time, then the current time in each of the timezones below.
echo "----------------------------"
echo "Local Time"
date
echo "----------------------------"
echo ""
echo "Time in Los Angeles"
export TZ=America/Los_Angeles
date
echo ""
echo "Time in San Francisco"
export TZ=US/Pacific   # legacy alias; same zone as America/Los_Angeles
date
echo ""
echo "Time in Dallas,TX"
export TZ=US/Central
date
echo ""
echo "Time in New York"
export TZ=America/New_York
date
echo ""
echo "^"
echo "|"
echo "|"
echo "----------------------------"
echo "Time in UTC"
export TZ=UTC
date
echo "----------------------------"
echo "|"
echo "|"
echo "v"
echo ""
echo "Time in India-Mumbai"
export TZ=Asia/Calcutta   # legacy alias; Asia/Kolkata is the current name
date
echo ""
echo "Time in Singapore"
export TZ=Asia/Singapore
date
echo ""
echo "Time in Perth"
export TZ=Australia/Perth
date
echo ""
echo "Time in Adelaide"
export TZ=Australia/Adelaide
date
echo ""
echo "Time in Sydney"
export TZ=Australia/Sydney
date
echo ""
echo "Time in Auckland"
export TZ=Pacific/Auckland
date
echo ""
unset TZ
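
If your list keeps growing, the same output can be generated from a loop instead of repeated export/date pairs. A compact sketch of the same idea (the labels and the selection of zones are illustrative; swap in your own):

#!/bin/bash
# Same idea, driven by a list: each entry is "Label|IANA timezone name".
zones="Los Angeles|America/Los_Angeles
New York|America/New_York
UTC|UTC
India-Mumbai|Asia/Kolkata
Singapore|Asia/Singapore
Sydney|Australia/Sydney"

echo "----------------------------"
echo "Local Time: $(date)"
echo "----------------------------"
while IFS='|' read -r label zone; do
    # Setting TZ only for the single date invocation keeps the shell environment clean
    printf '%-14s %s\n' "$label:" "$(TZ=$zone date)"
done <<< "$zones"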

Add an alias (persist it in your ~/.bashrc or ~/.zshrc) and call the script from the command line:

$ chmod +x /Users/shadab/Downloads/TIME.sh
$ alias japactime='/Users/shadab/Downloads/TIME.sh'
$ japactime



True Elasticity of Oracle Autonomous Database


Scaling Autonomous Database on OCI


This post first appeared on my Medium blog: https://shadabshaukat.medium.com/true-elasticity-of-oracle-autonomous-database-a4994b18a6c3



Introduction

Oracle’s Autonomous Database is a massively scalable serverless database available exclusively on Oracle Cloud Infrastructure. It is built on the proven reliability and maturity of the Oracle database over the last four decades, yet nothing about it is antiquated: it is a true serverless, elastic offering for persisting data for cloud-native apps. You don’t need to provision node types or define the number of nodes; just scale your OCPUs (virtual cores) and storage in increments of 1 TB and you are good to go. It can even scale automatically based on utilisation. It comes in four flavours (Data Warehouse, Transaction Processing, JSON, and Low-Code Development) and two deployment models: Shared Infrastructure or Dedicated Infrastructure.
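
As an aside, that automatic scaling can be toggled per database. A minimal sketch using the OCI CLI, assuming the CLI is installed and configured, and using a placeholder OCID:

# Enable compute auto scaling for an existing Autonomous Database
# (the OCID below is a placeholder; substitute your own database's OCID)
oci db autonomous-database update \
    --autonomous-database-id ocid1.autonomousdatabase.oc1..example \
    --is-auto-scaling-enabled true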

Having worked with the Oracle Autonomous Database since it launched in 2018, I’ve been an ardent advocate of how both enterprises and startups can reap the benefits of running a truly cloud-scale database. To prove its ability to provide a massively scalable persistent database for internet-scale apps as well as enterprise apps, I decided to perform a test to stretch (no pun intended) its elasticity.


What follows is a chronological account of how I scaled an Oracle Autonomous Data Warehouse from 1 OCPU with 1 TB of storage to 100 OCPUs with 100 TB of storage and back, scaling both compute and storage to 100x capacity and back again.

So let’s get started

Initial Start State — State 1

We start with 1 OCPU and 1 TB of storage.

You can run most small projects on 1 OCPU and 1 TB. But now let’s get adventurous and start scaling it up.
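
The scale operations in this post are a two-field form in the OCI console, but they can just as easily be scripted. A sketch of the State 1 to State 2 jump below using the OCI CLI, again with a placeholder OCID; scaling down later is the same call with smaller values:

# Scale to 30 OCPUs and 30 TB of storage in a single online call
oci db autonomous-database update \
    --autonomous-database-id ocid1.autonomousdatabase.oc1..example \
    --cpu-core-count 30 \
    --data-storage-size-in-tbs 30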

State 2 — Scale to 30x Capacity

We are going for 30 OCPUs with 30 TB of storage: enough to accommodate medium-sized workloads, and sufficient for most if not all enterprise data warehouses holding structured data.

Here we go…

Wait, what!? It scaled 30x in approximately 50 seconds. Yes, you read that correctly: it scaled a data warehouse up to 30 times its compute and storage capacity in under a minute.
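
If you want to reproduce the timing yourself, you can poll the database’s lifecycle state after submitting the update and measure how long it takes to return to AVAILABLE. A rough sketch, with the same placeholder OCID:

# Time the scale operation by polling the lifecycle state
start=$(date +%s)
while true; do
    state=$(oci db autonomous-database get \
        --autonomous-database-id ocid1.autonomousdatabase.oc1..example \
        --query 'data."lifecycle-state"' --raw-output)
    # state reads as an in-progress value until the scale finishes
    [ "$state" = "AVAILABLE" ] && break
    sleep 5
done
echo "Scale completed in $(( $(date +%s) - start )) seconds"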

Here’s what the end result of State 2 looks like.

State 3 — Scale to 100x Capacity

Now things get really interesting. We are going for a target of 100 OCPUs with 100 TB of storage, so from our first state that is a 100x storage and compute increase. I had no hope of this finishing as quickly and was prepared to wait at least 30 minutes for it to return success or failure. But once again, Autonomous completely surprised me…

Have a peek at the start and stop times.

It went from 30 OCPUs with 30 TB to 100 OCPUs with 100 TB in 63 seconds. Yes, you read that correctly! All this while your ETL jobs continue running and your BI dashboards keep serving your end users.

If you sum the State 1 to State 3 transitions, we are talking about a 100x capacity increase in compute and storage in under two minutes (113 seconds total). That is just mind-numbing performance. I’ve not yet come across any other cloud data warehouse that can increase its capacity this rapidly while your applications keep running.

Final Transition — Back to State 1

Scaling up is all well and good, and it is what most cloud providers mean when they speak of elasticity, but true elasticity is scaling up and scaling down. So in our final transition we will scale the Autonomous Database back down to the state we started at, i.e. State 1.

In the final transition we scale down from 100 OCPUs and 100 TB of storage to 1 OCPU with 1 TB of storage.

Scaling back took about 3 minutes 47 seconds. Still impressive, considering it released 100 times the capacity.

State Transition Summary

State Transition Timings

Final Run
As a final test, I wanted to scale directly from 1 OCPU + 1 TB to 128 OCPUs with 120 TB of storage, with no cool-down period in between.

Result: 59 seconds to scale from 1 OCPU + 1 TB to 128 OCPUs with 120 TB of storage. A 128x compute scale-up and a 120x storage scale-up in under a minute. Just amazing!

State Transition — Final Run

Conclusion

It was an astonishing run and an amazing result. It took 1 minute 53 seconds to scale up 100x and 3 minutes 47 seconds to scale back down; in a total of 5 minutes 40 seconds we went to 100 times the compute and storage size and back, in two increments. And without a cool-down period, we were able to scale from 1 OCPU and 1 TB to 128 OCPUs and 120 TB of storage in a mere 59 seconds.

This was a real-world run with no pre-provisioned capacity; it just shows how truly elastic the Autonomous Database is.

I hope you can see the benefit of running the Oracle Autonomous Database for growing your business without worrying about capacity. No matter how "big" the data requirements, you can obtain capacity in the Oracle Cloud at the click of a button.

Notes

[1] The Autonomous Database is available to try in an Always Free account, which gives you two Autonomous Databases with 20 GB of storage and 1 OCPU each. You can convert your Always Free account to a paid account whenever you are ready to scale 100x 🙂

[2] This test was not coordinated with Oracle or any internal Oracle teams to ensure capacity was readily available. It was a completely spontaneous proof-of-concept test of the Autonomous Database.

[3] The test was conducted in the OCI Australia East (Sydney) region.


CDG-50611 CDG-50620 CDG-50605 dg_api DataGuard prechecks failed for stage VERIFY_DG_PRIMARY on Exadata on Oracle Public Cloud


If you run into the errors below while creating a standby database on the OCI Exadata Cloud Service:

Error:

.-------------------------------------------------.
|                     RESULTS                     |
+----------------------------------------+--------+
| CHECK TYPE                             | STATUS |
+----------------------------------------+--------+
| check_file_creg                        | PASSED |
| check_file_sqlnet                      | PASSED |
| check_file_tnsnames                    | FAILED |
| db_status                              | PASSED |
| listener_status_listener               | PASSED |
| listener_status_scan_listener          | PASSED |
| node_status                            | PASSED |
| oracle_managed_files                   | PASSED |
| parameter_db_create_file_dest          | PASSED |
| parameter_db_recovery_file_dest        | PASSED |
| parameter_log_archive_config           | PASSED |
| parameter_log_archive_dest_1           | FAILED |
| parameter_remote_listener              | PASSED |
| space_check_/var/opt/oracle/dbaas_acfs | PASSED |
| space_check_RECO                       | PASSED |
| tnsport_check                          | PASSED |
| validate_sys_passwd                    | FAILED |
| wallet_size_check                      | PASSED |
'----------------------------------------+--------'
+-----------+--------------------------------------------------------------------
| EXCEPTION | DETAILS
+-----------+--------------------------------------------------------------------
| CDG-50611 | Parameter LOG_ARCHIVE_DEST_1 is not set
|           | Set parameter as ALTER SYSTEM SET LOG_ARCHIVE_DEST_1=
| CDG-50620 | Pre-check failed on file 'TNS_ADMIN/tnsnames.ora'
|           | Check permissions, content and status of 'TNS_ADMIN/tnsnames.ora'
| dg_api    | CDG-50107 : DataGuard prechecks failed for stage VERIFY_DG_PRIMARY
|           | Refer the exceptions raised and fix the issues
|           | File: dg_api, Line#: 1632, Log: /var/opt/oracle/log/testdb/dbaasapi/db/dg/dbaasapi_VERIFY_DG_PRIMARY_2022-06-20_17:18:42.214072_36287.log
'-----------+--------------------------------------------------------------------

An earlier run (2022-06-17, per the log name) had also flagged the SYS password check:

+-----------+--------------------------------------------------------------------
| EXCEPTION | DETAILS
+-----------+--------------------------------------------------------------------
| CDG-50605 | Password validation failed for database 'testdb'
|           | Given password should match password set in db_wallet and database 'testdb'
| CDG-50611 | Parameter LOG_ARCHIVE_DEST_1 is not set
|           | Set parameter as ALTER SYSTEM SET LOG_ARCHIVE_DEST_1=
| dg_api    | CDG-50107 : DataGuard prechecks failed for stage VERIFY_DG_PRIMARY
|           | Refer the exceptions raised and fix the issues
|           | File: dg_api, Line#: 1632, Log: /var/opt/oracle/log/testdb/dbaasapi/db/dg/dbaasapi_VERIFY_DG_PRIMARY_2022-06-17_00:39:32.946352_287312.log
'-----------+--------------------------------------------------------------------

Solution:

Location of the Data Guard logs on ExaCS:

/var/opt/oracle/log/<dbname>/dbaasapi/db/dg

e.g. /var/opt/oracle/log/testdb/dbaasapi/db/dg

  1. Update the permissions of the TNS_ADMIN folder as the oracle user for the database:

chmod -R 755 $ORACLE_HOME/network/admin/

  2. Create a new TNS entry in the tnsnames.ora file on both primary nodes, as shown below. Use the DB name as the name of the TNS entry, with the service name of the CDB (a quick resolution check follows the entry):

vi $ORACLE_HOME/network/admin/tnsnames.ora

testdb =
  (DESCRIPTION =
    (ADDRESS =
      (PROTOCOL = TCP)
      (HOST = demo-59938z-scan.dbclientsu.vcnsyd.oraclevcn.com)
      (PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = testdb_df5_syd)
      (FAILOVER_MODE =
        (TYPE = select)
        (METHOD = basic))))
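
A quick way to confirm the new entry resolves and that the SCAN listener responds is tnsping, run as the oracle user on each node:

# Should resolve the entry from tnsnames.ora and report OK with a latency
tnsping testdb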

  3. Update the SYS password entry in the db_wallet to the new SYS password, and make sure the database itself matches (a sync sketch follows the commands below):

Reference: How to change SYS Password On Data guard Associated databases - EXACC Gen 2 (Doc ID 2867554.1)

# View the current password entry
mkstore -wrl /var/opt/oracle/dbaas_acfs/testdb/db_wallet -viewEntry passwd

# Replace it with the new SYS password
mkstore -wrl /var/opt/oracle/dbaas_acfs/testdb/db_wallet -modifyEntry passwd NewPassword321#_

# Confirm the change
mkstore -wrl /var/opt/oracle/dbaas_acfs/testdb/db_wallet -viewEntry passwd
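
The validate_sys_passwd precheck compares the wallet entry against the actual SYS password in the database, so the two must match. If they have drifted, reset SYS to the same value; a sketch reusing the example password from the mkstore command above:

sqlplus / as sysdba <<'EOF'
-- Keep the database SYS password in sync with the db_wallet entry
ALTER USER SYS IDENTIFIED BY "NewPassword321#_";
EOF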

  4. Set LOG_ARCHIVE_DEST_1 (a quick verification sketch follows below):

show parameter log_archive_dest_1

-- Point the archive destination at the recovery area (DB_RECOVERY_FILE_DEST)
alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) MAX_FAILURE=1 REOPEN=5 DB_UNIQUE_NAME=testdb_df5_syd ALTERNATE=LOG_ARCHIVE_DEST_10' scope=both sid='*';

show parameter log_archive_dest_1
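
To confirm that logs actually archive with the new destination in place, force an archive and check the newest entries in V$ARCHIVED_LOG; a quick sketch:

sqlplus / as sysdba <<'EOF'
-- Force an archive, then show where the most recent logs landed
ALTER SYSTEM ARCHIVE LOG CURRENT;
SELECT dest_id, name FROM v$archived_log
 WHERE first_time > SYSDATE - 1/24;
EOF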

  5. Create the Data Guard configuration again and check the precheck log from the log location:

sudo -s

cd /var/opt/oracle/log/testdb/dbaasapi/db/dg

less dbaasapi_VERIFY_DG_PRIMARY_2022-06-20_19:33:06.638683_103596.log

.-------------------------------------------------.
|                     RESULTS                     |
+----------------------------------------+--------+
| CHECK TYPE                             | STATUS |
+----------------------------------------+--------+
| check_file_creg                        | PASSED |
| check_file_sqlnet                      | PASSED |
| check_file_tnsnames                    | PASSED |
| db_status                              | PASSED |
| listener_status_listener               | PASSED |
| listener_status_scan_listener          | PASSED |
| node_status                            | PASSED |
| oracle_managed_files                   | PASSED |
| parameter_db_create_file_dest          | PASSED |
| parameter_db_recovery_file_dest        | PASSED |
| parameter_log_archive_config           | PASSED |
| parameter_log_archive_dest_1           | PASSED |
| parameter_remote_listener              | PASSED |
| space_check_/var/opt/oracle/dbaas_acfs | PASSED |
| space_check_RECO                       | PASSED |
| tnsport_check                          | PASSED |
| validate_sys_passwd                    | PASSED |
| wallet_size_check                      | PASSED |
'----------------------------------------+--------'

With all checks now passing, the standby database should be created successfully.

