Feed aggregator

UKOUG Breakthrough 2022 : Day 2

Tim Hall - Fri, 2022-12-02 14:19

Day 2 started pretty much the same as day 1. I arrived late to avoid the traffic. The first session I went to was Martin Nash with “Oracle Databases in a Multicloud World”. I was a bit late to this session, but from what I saw it seemed the general view was “don’t be stupid, … Continue reading "UKOUG Breakthrough 2022 : Day 2"


UKOUG Breakthrough 2022 : Day 1

Tim Hall - Fri, 2022-12-02 03:26

The evening before the conference the Oracle ACEs met up for some food at a curry place in the city centre. Thanks to the Oracle ACE Program for organising this! Earlier that day I presented my first face to face session in 3 years, and now it was time for my first social event in … Continue reading "UKOUG Breakthrough 2022 : Day 1"


How to change Launch Configuration to Launch Template in AWS

Pakistan's First Oracle Blog - Thu, 2022-12-01 21:59

Here is a step-by-step guide on how to change a launch configuration to a launch template in AWS for an Auto Scaling group. It's actually quite simple and straightforward.

AWS sent out a notification this week to accounts that use Amazon EC2 Auto Scaling launch configurations: Amazon EC2 launch configurations will deprecate support for new instances. After December 31, 2022, no new Amazon Elastic Compute Cloud (Amazon EC2) instance types will be added to launch configurations. After that date, existing launch configurations will continue to work, but new EC2 instance types will only be supported through launch templates.


To update the ASG, follow these steps (a scripted CLI version follows the list):


1. Create a launch template, paste your user data scripts into it, and save it. Also make sure that you are using the correct AMI ID in it.


2. Once the launch template is created, navigate to your respective Auto Scaling group. In the details section of the ASG, click the "Edit" button in the launch configuration section. There you will get an option at the top like "Switch to Launch Template".


3. Select your newly created launch template and save the changes.
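The same switch can also be scripted. Here is a minimal AWS CLI sketch; the template name, AMI ID and ASG name are placeholders for illustration:

<code># Create a launch template from an existing AMI (values are placeholders)
aws ec2 create-launch-template \
  --launch-template-name my-template \
  --launch-template-data '{"ImageId":"ami-0123456789abcdef0","InstanceType":"t3.micro"}'

# Point the Auto Scaling group at the new template instead of the launch configuration
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template 'LaunchTemplateName=my-template,Version=$Latest'</code>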

Here is the documentation for creating a launch template.

Here is the documentation on how to replace a launch configuration with a launch template.

The existing instances will keep running; only new instances will be launched using the launch template. On the ASG console, in the instance management section, you can check whether an instance was launched using the launch template.

For instance-level testing, such as checking whether the application is running or the instance is working properly, you can log into the instance and verify the details. Note that the ASG will not automatically launch an instance after switching to the launch template; you would have to change the desired capacity to launch a new instance using it.

Categories: DBA Blogs

tracking blocking sessions

Tom Kyte - Thu, 2022-12-01 04:46
hello sir, I want to see the blocking sessions that occur during the day, so I can fix the sessions or SQL statements that are causing others to lock up. There is no trace of the blocker's SQL in GV$ACTIVE_SESSION_HISTORY. The sql_id and top_level_sql_id columns identify the blocked SQL, but not the blocker's SQL. How can I get it? thanks for your help.
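Not an official answer, just a sketch of a starting point: ASH tags each waiting sample with BLOCKING_SESSION / BLOCKING_SESSION_SERIAL#, so you can self-join ASH to see what the blocker was running at the same sample. The blocker's SQL is often missing precisely because a session holding a lock is usually idle, and idle sessions are not sampled by ASH:

<code>-- Sketch: waiters and what their blocker was doing at (roughly) the same sample.
-- Note: sample_id alignment across RAC instances is approximate.
SELECT w.sample_time,
       w.session_id        AS waiter_sid,
       w.sql_id            AS waiter_sql_id,
       w.event,
       w.blocking_session  AS blocker_sid,
       b.sql_id            AS blocker_sql_id
  FROM gv$active_session_history w
  LEFT JOIN gv$active_session_history b
         ON b.inst_id         = w.blocking_inst_id
        AND b.session_id      = w.blocking_session
        AND b.session_serial# = w.blocking_session_serial#
        AND b.sample_id       = w.sample_id
 WHERE w.blocking_session IS NOT NULL
 ORDER BY w.sample_time;</code>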
Categories: DBA Blogs

Default privileges for new objects

Tom Kyte - Thu, 2022-12-01 04:46
Hi, is there a method to set default system privileges for all newly created objects in Oracle, such as tables, sequences, procedures, functions and triggers? For example, a select privilege assigned to an Oracle user. How can this be done without having to write additional code beyond e.g. create table, sequence, function etc.? Best regards, Dackard.
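There is no built-in "default grant" for newly created objects; a common workaround (a sketch, with a hypothetical REPORT_USER grantee) is an AFTER CREATE DDL trigger that queues the GRANT as a job, since a trigger cannot execute the grant DDL directly:

<code>-- Sketch only: queues a GRANT after each CREATE in this schema.
-- REPORT_USER is a hypothetical grantee.
CREATE OR REPLACE TRIGGER grant_select_on_create
AFTER CREATE ON SCHEMA
DECLARE
  l_job BINARY_INTEGER;
BEGIN
  IF ora_dict_obj_type IN ('TABLE', 'VIEW', 'SEQUENCE') THEN
    -- DDL is not allowed inside the trigger itself, so submit a job
    -- that runs the GRANT right after the transaction commits.
    DBMS_JOB.SUBMIT(
      l_job,
      'EXECUTE IMMEDIATE ''GRANT SELECT ON "' || ora_dict_obj_name ||
      '" TO report_user'';');
  END IF;
END;
/</code>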
Categories: DBA Blogs

Need to know if there is any provision for copying data in parallel from a main table to a table in a different schema

Tom Kyte - Thu, 2022-12-01 04:46
Hello Team, I would like to know whether there is any provision for copying data in parallel from a main table (in one schema) to a table in a different schema, without hampering current traffic? We need an approach that does this as quickly as possible. Please note: we want to create a new schema containing the same tables as the main schema. As soon as a SQL operation happens on a main schema table, the change should be copied to the new schema quickly. We will use this new schema for reporting purposes only, which is why we need to develop this approach. Please guide us on this as soon as possible. Thanks & Regards, Shital
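Not a definitive answer, but one standard pattern for a near-real-time, read-only reporting copy is a fast-refresh materialized view in the reporting schema, driven by a materialized view log on the source table. A sketch with hypothetical MAIN/REPORT schemas and an ORDERS table:

<code>-- On the main schema's table (assumes a primary key exists)
CREATE MATERIALIZED VIEW LOG ON main.orders WITH PRIMARY KEY;

-- In the reporting schema: refreshed incrementally whenever the source commits
CREATE MATERIALIZED VIEW report.orders_mv
  REFRESH FAST ON COMMIT
  AS SELECT * FROM main.orders;</code>

If refreshing on every commit is too intrusive for the OLTP workload, REFRESH FAST ON DEMAND driven by a short-interval scheduler job is a lower-impact alternative.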
Categories: DBA Blogs

PL/SQL question: how to write a query that accepts start and end booking dates and displays all hotel reservations between those dates.

Tom Kyte - Thu, 2022-12-01 04:46
create table hotel_reservation with the following fields: booking id, booking start date, booking end date, room type, room rent. Write a PL/SQL block to accept start and end booking dates and display all hotel reservations between those dates. *** In this question I am stuck in: *** <code>create table hotel_reservation(
  booking_id         number(20),
  booking_start_date date,
  booking_end_date   date,
  room_type          varchar2(50),
  room_rent          number(20));

insert into hotel_reservation values(1, TO_DATE('10-05-2022', 'dd-mm-yyyy'), TO_DATE('12-05-2022', 'dd-mm-yyyy'), 'Double', 12000);
insert into hotel_reservation values(2, TO_DATE('21-07-2022', 'dd-mm-yyyy'), TO_DATE('25-07-2022', 'dd-mm-yyyy'), 'Single', 5000);
insert into hotel_reservation values(3, TO_DATE('01-02-2022', 'dd-mm-yyyy'), TO_DATE('02-02-2022', 'dd-mm-yyyy'), 'Luxury', 30000);
insert into hotel_reservation values(4, TO_DATE('30-06-2022', 'dd-mm-yyyy'), TO_DATE('01-07-2022', 'dd-mm-yyyy'), 'Double', 10000);
insert into hotel_reservation values(5, TO_DATE('15-10-2022', 'dd-mm-yyyy'), TO_DATE('15-10-2022', 'dd-mm-yyyy'), 'Quad', 11000);

select * from hotel_reservation;

DECLARE
  book_id    hotel_reservation.booking_id%type;
  book_sdate hotel_reservation.booking_start_date%type := TO_DATE('10-05-2022', 'dd-mm-yyyy');
  book_edate hotel_reservation.booking_end_date%type   := TO_DATE('15-10-2022', 'dd-mm-yyyy');
  r_type     hotel_reservation.room_type%type;
  r_rent     hotel_reservation.room_rent%type;
BEGIN
  SELECT booking_id, booking_start_date, booking_end_date, room_type, room_rent
    INTO book_id, book_sdate, book_edate, r_type, r_rent
    FROM hotel_reservation
   WHERE booking_start_date = book_sdate
     AND booking_end_date   = book_edate;
  dbms_output.put_line('hotel_reservation ' || book_id || ' ' || book_sdate || ' ' ||
                       book_edate || ' ' || r_type || ' ' || r_rent);
END;</code> *** in the begin block of pl/sql ***
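For what it's worth, a sketch of how that BEGIN block could be restructured: SELECT INTO scalar variables raises TOO_MANY_ROWS as soon as more than one booking matches, and equality on both dates misses everything in between, so a cursor FOR loop over a range predicate fits the requirement better:

<code>DECLARE
  book_sdate DATE := TO_DATE('10-05-2022', 'dd-mm-yyyy');
  book_edate DATE := TO_DATE('15-10-2022', 'dd-mm-yyyy');
BEGIN
  -- loop over every reservation whose stay falls inside the requested window
  FOR r IN (SELECT booking_id, booking_start_date, booking_end_date, room_type, room_rent
              FROM hotel_reservation
             WHERE booking_start_date >= book_sdate
               AND booking_end_date   <= book_edate)
  LOOP
    dbms_output.put_line('hotel_reservation ' || r.booking_id || ' ' ||
                         r.booking_start_date || ' ' || r.booking_end_date || ' ' ||
                         r.room_type || ' ' || r.room_rent);
  END LOOP;
END;
/</code>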
Categories: DBA Blogs

Split String with table function

Tom Kyte - Thu, 2022-12-01 04:46
Hi, I use this table function for string splitting (a space is used as a separator): <code>create or replace FUNCTION fmv_space_to_table(p_list IN VARCHAR2)
  RETURN fmv_test_type
AS
  l_string      VARCHAR2(32767) := p_list || ' ';
  l_comma_index PLS_INTEGER;
  l_index       PLS_INTEGER := 1;
  l_tab         fmv_test_type := fmv_test_type();
BEGIN
  LOOP
    l_comma_index := INSTR(l_string, ' ', l_index);
    EXIT WHEN l_comma_index = 0;
    l_tab.EXTEND;
    l_tab(l_tab.COUNT) := TRIM(SUBSTR(l_string, l_index, l_comma_index - l_index));
    l_index := l_comma_index + 1;
  END LOOP;
  RETURN l_tab;
END fmv_space_to_table;</code> The table function works fine: <code>select * from table( fmv_space_to_table( 'A2345 123456 7890 2344'))</code> Output: A2345 123456 7890 2344. When "fmv_space_to_table" is used in a query with only number values, it works fine, e.g. <code>... s.productnumber IN ( select * from table( fmv_space_to_table( '123456 7890')) )</code> When it is used in a query with letter + number values, it doesn't work, e.g. <code>... s.productnumber IN ( select * from table( fmv_space_to_table( 'A2345')) )</code> Error: ORA-00904: "A2345": invalid identifier. Cause of the error: my table function returns A2345 as output, but in this case 'A2345' is needed. Question: how can the table function "fmv_space_to_table" be extended so that it can also be used with letter + number values?
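For what it's worth, ORA-00904 with the value treated as an identifier usually points at the list being concatenated into dynamically built SQL, rather than at the function itself; querying the collection directly needs no quoting at all. A sketch, with a hypothetical PRODUCTS table and assuming fmv_test_type is defined as a VARCHAR2 nested table type:

<code>-- Assumes: CREATE TYPE fmv_test_type AS TABLE OF VARCHAR2(32767);
SELECT s.*
  FROM products s
 WHERE s.productnumber IN (
         SELECT column_value
           FROM TABLE(fmv_space_to_table('A2345 123456'))
       );</code>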
Categories: DBA Blogs

ORA-00001 on merge, concurrent transactions

Tom Kyte - Thu, 2022-12-01 04:46
When running two simultaneous merges into a table with a unique constraint in different sessions, the second one throws ORA-00001. Steps to reproduce: <code>create table tab1 (col1 number);
create unique index ind1 on tab1(col1);

--1st session:
merge into tab1 d
using ( select 1 col1 from dual ) s
on (s.col1 = d.col1)
when not matched then insert (col1) values(s.col1);

--2nd session:
merge into tab1 d
using ( select 1 col1 from dual ) s
on (s.col1 = d.col1)
when not matched then insert (col1) values(s.col1);
--second session now hangs

--1st session:
commit;

--2nd session:
--throws ORA-00001</code> As far as I know, this might be how Oracle has behaved since the MERGE statement was introduced, and it might be considered to work as expected. My objection is that MERGE behaves differently depending on whether the first session has committed before the second session starts its merge. The second session obviously knows it cannot just insert, so it hangs and waits for the first session to finish. And the second session is only blocked if it is working on the same unique index key, so it is clearly aware that an uncommitted session is working on the same record. Yet when the first session commits the record (resulting in an insert), the second session has already decided that it too should insert, and it violates the constraint. But why? Shouldn't the second session re-evaluate what the correct action is? If we try this with [merge when matched], the second session hangs until the first one commits, but even though the constrained column is updated, there is no issue for the second session to update the same record. Does this make sense, or do you think "it works as intended" and shouldn't be addressed?
Categories: DBA Blogs

Birmingham City University (BCU) : The changing role of the DBA

Tim Hall - Thu, 2022-12-01 03:35

Yesterday I took a trip across town to Birmingham City University (BCU) to do a talk to the students. The talk was called “The changing role of the DBA”. It’s been over 3 years since I’ve done a face-to-face presentation. I did some online presentations at the start of lockdown, but it’s been 2 years … Continue reading "Birmingham City University (BCU) : The changing role of the DBA"


AWS re:Invent day 3 – Gray failures mitigation

Yann Neuhaus - Wed, 2022-11-30 19:36

So again, during the 3rd day of re:Invent 2022 there was plenty of interesting information to take with us. However, the most interesting part was for sure the workshop about gray failures and how to mitigate them in a multi-AZ environment. Let me try to wrap this up for you…

What are gray failures?

No idea what "gray failures" stands for? Let us try to recall a situation we have surely all experienced, when users say "the application ain't working! Can't work.." while at the same time, on the infrastructure side, you hear "Nope, everything green, no alerts there… must be the network ;-)"

That's gray failures: a non-binary situation where everything is working and not working at the same time. A kind of distortion between the application/user perception and the system one.

It is typically a conjunction of "small" issues that may not even have breached any monitoring alert thresholds, but that taken together make the application unusable. Some research even states that the biggest cloud outages come from these gray failures, as part of the so-called failure circle.

Using a multi-AZ architecture on AWS, you can indeed try to detect and mitigate these gray failures.

Let's take as an example a web-based application running across 3 availability zones with a load balancer and front-end web servers (as part of an auto scaling group), and a database as backend.

How to detect gray failures?

In the case of gray failures, our traditional monitoring approach (i.e. CPU > 96%, latency > 60ms, …) isn't enough. Again, the CPU may only be at 95%… but the latency may also have reached 59ms… and at the same time we face an unusual amount of retransmitted packets due to a "tired" switch.

Do you remember "Murphy's Law"? Everything that can go wrong… goes wrong…

On AWS, CloudWatch can help you there.

Using AWS CloudWatch you will first create your traditional alerts (CPU, latency, …), but also region availability and latency ones. Then, in a second step, CloudWatch allows you to create so-called "composite alarms", which are exactly based on this principle of combining several alarms/metrics using simple AND, OR and NOT expressions.
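As a rough illustration (the alarm names and SNS topic are hypothetical), such a composite alarm can be created with the AWS CLI:

<code># Fire only when BOTH underlying alarms are in ALARM state at once
aws cloudwatch put-composite-alarm \
  --alarm-name "az1-gray-failure" \
  --alarm-rule 'ALARM("az1-cpu-high") AND ALARM("az1-latency-high")' \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts</code>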

You can find much more information about detecting such gray failures in the AWS Advanced Multi-AZ Resilience Patterns whitepaper.

How to react?

Now that we have an idea of how to detect these gray failures, the obvious question is "how to react when it happens".

The principle is pretty "easy": evict the AZ which is facing issues from our multi-AZ architecture.

This is the same principle as a cluster evicting a failed node.

To do so we can act on 2 levels:

1. Stop sending requests to the load balancer of the affected AZ by deactivating this "route" in our Route 53 DNS configuration

2. Stop the instances in the faulty AZ and make sure the auto scaling group will bootstrap new ones in the remaining AZs

Disabling the DNS in Route53

Here is the overall idea. When the alert is raised, you set a simple value in a DynamoDB table stating that the corresponding AZ is unhealthy.

In Route 53 you will have linked each DNS entry to a health check.

What does this health check, which has been configured beforehand in Route 53, actually do?
It calls Amazon API Gateway to make a REST request against the DynamoDB table, checking the health state of the corresponding AZ.

If the REST API comes back stating that the AZ is no longer healthy (by sending a 4xx code back), then Route 53 won't return this path until it gets healthy again.
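For illustration only (domain, path and caller reference are placeholders), such a health check pointing at the API Gateway endpoint could be created like this:

<code>aws route53 create-health-check \
  --caller-reference az1-evict-check-001 \
  --health-check-config '{
      "Type": "HTTPS",
      "FullyQualifiedDomainName": "health.example.com",
      "ResourcePath": "/status/az1",
      "Port": 443
    }'</code>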

You can also have a look on the AWS article Creating Disaster Recovery Mechanisms Using Amazon Route 53.

Updating the auto scaling group

So now that our users aren't sent to the faulty AZ anymore, the next step is to update the auto scaling group for my web front-end instances. This should automatically shut down the ones in the faulty AZ and start new ones in the remaining AZs.

To do so, the idea is again to use a table in DynamoDB in which we store the subnets of the faulty AZ.

Finally, we simply run a Lambda function to update the auto scaling group and remove those subnets. As a consequence, the instances running in them will be shut down and new ones will be started in the remaining AZs.
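A minimal sketch of such a Lambda in Python (the ASG name and subnet IDs are placeholders; a real function would read the healthy subnets from the DynamoDB table first):

<code>import boto3

autoscaling = boto3.client("autoscaling")

def handler(event, context):
    # In practice these would be looked up from the DynamoDB "AZ health" table;
    # hard-coded here purely for illustration.
    healthy_subnets = ["subnet-aaa111", "subnet-bbb222"]

    # Restricting VPCZoneIdentifier to the healthy subnets makes the ASG
    # terminate instances in the evicted AZ and relaunch them elsewhere.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-frontend-asg",
        VPCZoneIdentifier=",".join(healthy_subnets),
    )</code>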


Installing Azure AZ Module on Windows

Yann Neuhaus - Wed, 2022-11-30 13:52

Since the Azure (or AzureRM) module was marked as deprecated some years ago, the Az module is the official replacement for managing Azure resources with PowerShell.

This module requires at least PowerShell 7.0.6 LTS (Long Term Support) or PowerShell 7.1.3, but higher versions are recommended.

While PowerShell 1.0 to 5.1 ship as components of the Windows operating system, PowerShell 7 is cross-platform and sits on top of .NET Core.
As a result, PowerShell Core has to be deployed on the environment to use this module.

If you try to use AZ module on Windows PowerShell, you will encounter the following error:

Indeed, I do not have the recommended versions specified in the documentation:

For my example, I needed to install PowerShell Core. To do so, I ran an MSI installer with a PowerShell LTS release.
This kind of release only contains security and servicing fixes, to minimize the impact on production environments. They are of course included in the Microsoft Lifecycle Policy.

NOTE: PowerShell 7.0 is no longer supported; only PowerShell 7.1 and later are now available.

After my installation, here is the version I get:

This new module allows you to download only the packages you need, which avoids having too many resources loaded in your PowerShell console.

Keep in mind that executing Install-Module -Name AZ will install ALL packages:

In most cases, you will prefer to install only the packages relative to the Azure resources you want to manage, as in the sketch below.
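For example, a sketch installing and loading just two sub-modules (pick the ones matching the resources you actually manage):

<code># Install only the sub-modules you need, for the current user
Install-Module -Name Az.Accounts, Az.Compute -Scope CurrentUser -Repository PSGallery

# Load and use them
Import-Module Az.Accounts
Connect-AzAccount          # interactive sign-in
Get-AzVM                   # requires Az.Compute</code>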

In addition, using Import-Module -Name Az will load all packages.
Here is an extract of the PowerShell Module Script associated to AZ module:

If you still have scripts running with the AzureRM module, you should update them before 29 February 2024.


AWS re:Invent 2022 – Day 2

Yann Neuhaus - Wed, 2022-11-30 10:30

The second day usually starts with the keynote from Adam Selipsky, CEO of AWS. The keynote is not a technical session, but it's a good place to learn about new product or service launches. At such a big event, it's a show as well. When you enter the room, there is a band playing music, and I liked the setlist full of rock music!

Announcements

One part of the keynote was dedicated to data and how to discover, ingest and process it. In that area, he announced the Amazon Aurora integration with Amazon Redshift and the Amazon Redshift integration with Apache Spark. The idea is to run analytics queries on near-real-time transactional data. For visualisation of your data, he introduced new question capabilities supported by Q in Amazon QuickSight.

The idea of this blog is not to list all the new launches. You can get a full list of new announcements on a dedicated AWS blog: Top announcements of AWS re:Invent 2022.

Audit and compliance

Today I didn't attend any workshop, but I was in a builders session with David. These sessions are really appreciated, and you need to be fast to save your spot. It looks like a workshop, but there are fewer seats and there is one dedicated speaker per table. We configured CloudTrail Lake in a demo account, then ran a few queries to explore the audit data. I invite you to look at David's blog, AWS re:Invent day 2: All about Lakes, for more details about the feature itself.

Learning with games

I also wanted to try a session type that was new to me: gamified learning. I decided to go easy for my first time; indeed, I chose to participate in a session where you take the quests at your own pace, without competition.

Once seated, I registered on a platform and the pitch is quite simple: I'm a new employee of the startup Unicorn and I have some quests to solve. You win points by solving those quests. You can work as a team per table and compete with the other tables if you'd like. The image on the left shows a few of the available quests. I completed 2 quests of different levels: one very easy (1 out of 5) about web resiliency, and one more difficult (4 out of 5) about scaling containers in ECS.

The easier quest was well guided and close to the content you get in a workshop or builders session. There are different tasks with well-explained objectives, but the answers are hidden, and revealing the hints costs you some of the points you have earned.

As for the "Scaling Fast and Furious" quest, it is more like a scenario. I got only a few lines of text giving context about the application not meeting customer expectations, plus some vague hints provided by colleagues, but no clear tasks and objectives. I'm happy I finally completed this quest, even though I did not have a lot of practical experience with ECS task auto-scaling.


Comparing two tables in SQL Developer

Tom Kyte - Wed, 2022-11-30 10:26
Hi, how can I compare two tables in SQL Developer in the same database, and perform the resulting actions (add, delete, modify) to sync a source table into a target table? Can I get an example with a procedure for this task? For example, table A holds old data and table B holds new data; after comparing the two tables, any new data should be applied to the target by insert, update or delete. I need a procedure with an example, please. Thanks
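Note that SQL Developer itself ships a compare tool (Tools > Database Diff). If a procedural solution is needed, the usual building block is a single MERGE; a sketch with hypothetical table and column names:

<code>MERGE INTO table_a t          -- target: old data
USING table_b s               -- source: new data
   ON (t.id = s.id)
 WHEN MATCHED THEN
      UPDATE SET t.col1 = s.col1, t.col2 = s.col2
      DELETE WHERE s.is_removed = 'Y'   -- optional: purge rows flagged in the source
 WHEN NOT MATCHED THEN
      INSERT (id, col1, col2)
      VALUES (s.id, s.col1, s.col2);</code>

Rows present in the target but absent from the source are not touched by this MERGE; removing those needs a separate DELETE ... WHERE NOT EXISTS against the source.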
Categories: DBA Blogs

Average number of rows over a time period

Tom Kyte - Wed, 2022-11-30 10:26
I find myself needing to generate a data set where I get the average number of rows over a time period. This period needs to be daily, by hour, for the same period on previous days. As an example, get the average number of records from 1 am to 2 am for the last 180 days. In effect, count the number of rows on 17 Nov 2022 between 1 am and 2 am, then the number of rows on 16 Nov between 1 am and 2 am, then on 15 Nov between 1 am and 2 am, etc. Putting the date and time in the WHERE clause is easy enough but is manual. I would like to dynamically create these values for the WHERE clause based on a start date and time passed to the query. Therefore, if the start date and time passed in is 11-17-2022 01:00, an hour is added to get the end hour, and then 1 day is subtracted for each of the past 180 days to get the average row count for the date range. Is there some function(s) within Oracle that I could use to dynamically create the dates and times based on the start date and time?

Update 23 Nov 2022: based on some research I've been doing, here is some code I have come up with so far. I thought it best to put this in a procedure. <code>CREATE OR REPLACE PROCEDURE TestAvgATMTransactions (
  DaysToLookBack             INT,
  CurrentStartDateTimeInput  TIMESTAMP,
  CurrentEndDateTimeInput    TIMESTAMP,
  PreviousStartDateTimeInput TIMESTAMP,
  PreviousEndDateTimeInput   TIMESTAMP,
  RTXNTYPCDToCount           VARCHAR2(4 BYTE)
)
IS
BEGIN
  DECLARE
    Low_ATM_Tran_Count_PWTH EXCEPTION;
    TYPE two_cols_rt IS RECORD (
      PreviousStartDateTime TIMESTAMP,
      PreviousEndDateTime   TIMESTAMP
    );
    TYPE DateTimesToQuery IS TABLE OF two_cols_rt;
    StartEndDateTime DateTimesToQuery;
    PRAGMA EXCEPTION_INIT(Low_ATM_Tran_Count_PWTH, -20001);
    PWTHCount RTXN.RTXNNBR%TYPE; -- the average transactions during the given period
    AvgTransactions INT;         -- the current average transactions
  BEGIN
    -- This does generate the days given how far I want to look back,
    -- however it does not include the time for some reason.
    -- Need to figure out why, as the time is crucial.
    -- (Chris Saxon - https://livesql.oracle.com/apex/livesql/file/content_LIHBDFVF9IUU6AFZB4H6NVLWL.html)
    SELECT TO_TIMESTAMP (PreviousStartDateTimeInput, 'mm-dd-yyyy hh24:mi') + LEVEL - 1, -- AS StartDateTime,
           TO_TIMESTAMP (PreviousEndDateTimeInput, 'mm-dd-yyyy hh24:mi') + LEVEL - 1    -- AS EndDateTime
      BULK COLLECT INTO StartEndDateTime
      FROM DUAL
    CONNECT BY LEVEL < DaysToLookBack;
    DBMS_OUTPUT.PUT_LINE (StartEndDateTime.COUNT);
  END;

  -- Based on the article "Bulk data processing with BULK COLLECT and FORALL in PL/SQL"
  -- by Steven Feuerstein (https://blogs.oracle.com/connect/post/bulk-processing-with-bulk-collect-and-forall)
  -- FORALL seems like it would do what I want, but not sure
  FORALL index IN 1 .. StartEndDateTime.COUNT
    -- This code needs to get the average for the given RTXNTYPCD for every day
    -- in my two_cols_rt record.
    -- Yes, needs work, as this will not give me the overall average of transactions
    SELECT COUNT(RTXNSOURCECD)
      INTO PWTHCount
      FROM RTXN
     WHERE datelastmaint BETWEEN PreviousStartDateTimeInput AND PreviousEndDateTimeInput
       AND RTXNSOURCECD = 'ATM'
       AND RTXNTYPCD = RTXNTYPCDToCount;
  -- end FORALL

  -- This gets the current count for the given RTXNTYPCD
  SELECT COUNT(RTXNSOURCECD)
    INTO AvgTransactions
    FROM RTXN
   WHERE datelastmaint BETWEEN CurrentStartDateTimeInput AND CurrentEndDateTimeInput
     AND RTXNSOURCECD = 'ATM'
     AND RTXNTYPCD = RTXNTYPCDToCount;

  -- If the current count for the given RTXNTYPCD
  -- is less than the average for the period, raise the error
  IF AvgTransactions < PWTHCount THEN
    RAISE Low_ATM_Tran_Count_PWTH;
  END IF;
END;</code> Thank you, Michael
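For the underlying question (the average row count for the same hour over the last N days), a set-based aggregation may be simpler than looping; a sketch against the asker's RTXN table, assuming the 01:00-02:00 window:

<code>-- Average 1am-2am row count over the last 180 days, from one GROUP BY
SELECT AVG(cnt) AS avg_rows_per_day
  FROM (SELECT TRUNC(datelastmaint) AS d, COUNT(*) AS cnt
          FROM rtxn
         WHERE datelastmaint >= TRUNC(SYSDATE) - 180
           AND TO_CHAR(datelastmaint, 'HH24') = '01'   -- the 01:00-01:59 hour
           AND rtxnsourcecd = 'ATM'
         GROUP BY TRUNC(datelastmaint));</code>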
Categories: DBA Blogs

Oracle dump into csv file has an empty line at the start.

Tom Kyte - Wed, 2022-11-30 10:26
I am executing a SQL*Plus session to dump a table's data into a .csv file. I need header details along with the data in the dumped csv file. When I use the "set heading on" option, I get the header details along with the data, plus one empty line at the start. I tried multiple options like "set newpage NONE" and others, but the first empty line still appears. Can you please help solve this issue? My code snippet is below: <code>sqlplus -s /nolog << !EOF!
WHENEVER OSERROR EXIT 9;
WHENEVER SQLERROR EXIT SQL.SQLCODE;
connect sys/${DB_PASSWORD}@//${CONN_STR}:${CONN_PORT}/XE as ${DB_USERNAME}
--/*************************/--
set echo off
set feedback off
set linesize 4000
set newpage NONE
set pagesize 0
set serveroutput off
set termout off
set flush off
SET NUMWIDTH 5
SET COLSEP ","
set markup csv on
set sqlprompt ''
set trimspool on
SET VERIFY OFF
SET TERM OFF
--/*******************************/--
--/**** Clear Screen & Buffer ****/--
--/*******************************/--
clear screen
clear buffer
--/*****************************/--
--/**** Start spooling file ****/--
--/*****************************/--
SPOOL $FILE
set heading on
set pagesize 0 embedded on
select * from $table_name;
SPOOL OFF
EXIT
!EOF!</code> Output (note the empty first line): <code>sh-4.2$ cat aprm_mirrordb_persons_2022_11_16.csv

"PERSONID","LASTNAME"
1,"das"
2,"das2"
3,"abc"
4,"def"
5,"testdata only temporary purpose"
6,"testdata only temporary purpose"
7,"testdata only temporary purpose"</code>
Categories: DBA Blogs

Oracle application containers

Tom Kyte - Wed, 2022-11-30 10:26
I have created an application root container and two application PDBs. In the application root I created a table PRODUCTS as follows: <code>CREATE TABLE products SHARING=EXTENDED DATA
  (prod_id   NUMBER(4) CONSTRAINT pk_products PRIMARY KEY,
   prod_name VARCHAR2(15));

ALTER TABLE hr_app_owner.products ENABLE containers_default;</code> If containers_default is enabled on this table, then the query (select * from products;) hangs. When containers_default is disabled, the query works using the CONTAINERS clause (select * from CONTAINERS(products);). Is this a restriction of SHARING=EXTENDED DATA? I tried to search through the documentation without any luck. Thanks in advance.
Categories: DBA Blogs

SQL Query performance issue

Tom Kyte - Wed, 2022-11-30 10:26
Dear Team, we have a delete statement that, when executed, runs for an indefinite time (we even let it run for more than 3 days and it was still running) until we killed the session. We have been struggling with support for more than a month and they have no idea so far what the possible cause could be. We need your help to find out why/where it is hanging. The delete command is as follows: <code>delete from T where rec_date <= to_date('15-JAN-2013')
/</code> Below are the details about the table: <code>SQL> select count(*) FROM T;

    COUNT(*)
------------
   100175652

SQL> ed
Wrote file c:\app\afiedt.buf

  1  select count(*) FROM T
  2* where rec_date <= to_date('15-JAN-2013')
SQL> /

    COUNT(*)
------------
      328786</code> Given the number of records, it shouldn't run for this much time. The table involved has a child table, too (we had already deleted the records from the child table before executing the above query). If we re-import (into a different schema) only the parent table (T) and try deleting from that table, it completes in 10 minutes. However, if we re-import the parent and child tables, the hang behavior can be seen.

10046 trace: <code>call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.02          0          0          0           0
Execute      1    257.64    1332.39    3575286    1360867    2214317           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    257.67    1332.42    3575286    1360867    2214317           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 64
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  DELETE  T (cr=0 pr=0 pw=0 time=0 us starts=8)
        21         21         21   TABLE ACCESS FULL T (cr=1360867 pr=1361006 pw=0 time=1990133 us starts=33923681 cost=1990133 size=6508050 card=216935)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                              1        0.00          0.00
  library cache pin                               1        0.00          0.00
  row cache lock                                  5        0.00          0.00
  Disk file operations I/O                       36        0.00          0.00
  db file sequential read                   2214297        1.03       1155.11
  db file scattered read                      10646        0.03         19.67
  resmgr:cpu quantum                             99        0.10          1.63
  gc current grant 2-way                         77        0.00          0.13
  latch: gc element                              16        0.00          0.00
  resmgr:internal state change                    1        0.09          0.09
  latch: gcs resource hash                        1        0.00          0.00
  latch: object queue header operation            1        0.00          0.00</code> DDL of the parent/child tables (table/constraint names renamed): <code>Create Table T (
  Serv_Prov_Code     Varchar2(15 Char) Not Null Enable,
  B1_Con_Nbr         Number Not Null Enable,
  B1_Con_History_Nbr Number Not Null Enable,
  B1_Con_Genus       Varchar2(15 Char) Not Null Enable,
  B1_Con_Comment     Varchar2(4000),
  B1_Con_Des         Varchar2(255 Char),
  B1_Con_Eff_Dd1     Date,
  B1_Con_Expir_Dd    Date,
  B1_Con_Impact_Code Varc...</code>
Categories: DBA Blogs

We've added Search!

Rittman Mead Consulting - Wed, 2022-11-30 08:07

Well, to be precise, our blog is hosted on Ghost, and we have just upgraded to version 5.0, which includes search; see here.


What does this mean? Well, now you don't have to use external search engines to find pages on the Rittman Mead blog; you can use our internal search engine.

Categories: BI & Warehousing

Oracle APEX - Social Login

Rittman Mead Consulting - Wed, 2022-11-30 06:21

I looked into Social Sign-in as an option for Oracle APEX a few years ago. This was pre APEX 18.1 and, at that time, it was not simple to configure (in fact it would have taken a considerable amount of code to implement).

Fortunately, since 18.1 APEX natively offers functionality to integrate with single (social) sign-on providers, which makes the whole process much easier.
This blog will describe the process of getting it up and running, and why this might make life easier both for the developer and for the end user.

Why rely on a 3rd party identity provider (IdP)?

Using a 3rd party to manage and authenticate the users that will be accessing your APEX application offers several potential advantages.

Firstly, delegating such a crucial task as security to an expert in the field of authentication is an inherently sensible idea. It eliminates the need to support (and possibly code) a password management system inside the APEX application itself. This relieves an APEX developer of time spent managing users and worrying about the innate security risks that go hand in hand with storing this type of data, not to mention trying to implement two-factor authentication (2FA)!

Secondly, from a user's perspective it should provide a better experience, especially if the IdP is chosen carefully based on the application's use. For example, if the application is to reside within an enterprise environment where users already use Microsoft Azure to authenticate into various services (such as email), then, using the Azure IdP APIs, users could log into APEX with the same username/password. If the APEX application is deployed in a more publicly accessible space on the web, then using a generic IdP like Google or Facebook will allow you to capture user details more simply, without exposing users to the tedious experience of having to type in (and remember) their details for yet another website in order to register or pay for something.

Allowing users to log into many systems using a single 3rd party system is sometimes known as federated authentication or single sign-on (SSO), and the choice now includes many providers:

  • Facebook
  • Microsoft
  • Google
  • Oracle IDCS, Ping, Okta etc
See also: Federated identity (Wikipedia).
How do they work?

The protocols IdPs use to authenticate users into client applications (such as APEX) have their roots in OAuth2, a standard developed by the web community in 2012 to help websites share users' resources securely. A typical example of a requested resource is when a website (the client) you are registering on wants to access the list of contacts you have in your Gmail account (the resource holder), so it can email your friends in an attempt to get them to register too. OAuth2 allows an authorisation flow where the website redirects you to a Google server (the Gmail provider), which subsequently asks you to authorise this request and, with your consent, provides an access token back to the original client website, which is then used to query your contact data securely.

With OAuth2, websites could start sharing all sorts of data with each other including, commonly, simple user profile data itself (e.g. name, email address, phone numbers). However, it is important to recognise that OAuth2 is an authorisation rather than an authentication protocol. In 2014 the OAuth2 specification was extended with OpenID, which specifically deals with the authentication of a user. It is these standards that IdPs use to federate users.

The flow in more detail

The following diagram/points explain the data flow in more detail. In this example we will use a hypothetical setup of a client app using Facebook as its IdP. Note that before this can occur, the client app needs to register itself with Facebook and obtain a client ID and client secret, which it will need in some of the authentication steps.

  1. User attempts to log onto the client. The authentication scheme redirects the user to Facebook authentication end point (with its client id)
  2. User authenticates with Facebook (if not logged in already). The user is prompted to confirm that they trust the client app (this step is skipped the next time the user logs in)
  3. Facebook redirects back to the application with an authorisation code
  4. The client application uses the authorisation code (with its client ID and secret) to get an identity token about the user (with various metadata), and that is accepted by the client as a valid authentication
Setting up an example in APEX!

OK, so let's get an example up and running. For simplicity, I shall run through doing this on the Universal Theme application. We will change access from public to Social Sign-In (OAuth 2.0), create a couple of tables to hold who logged into the application and when, and then add a report page to the application detailing user access.

This assumes you have the following:

  • An Oracle APEX Cloud account with a workspace and the Universal Theme application installed (the latest UT application can be downloaded from here).
  • You have created a Google account, which will require a credit card. You may have already used this to create your Free Tier ATP instance where you have APEX installed and, as in that scenario, we will need it for Google. Unless you launch your app to a very large community and start using other Google API features, it will be free too. Google will send you alerts if and when you approach the end of the free quota, which resets each month. As an example, the Google Maps Matrix API allows 28k calls a month for free.
  • You have some familiarity with APEX and are not a complete beginner. If you are just starting, I would recommend you use one of the many resources now available online to get started.
    Oracle have a lot of information and here is a good starting point.
Overview of what we will build
  1. Open a developer account with Google & setup the Developer Account
  2. Register an application into this google account and generate a client id and secret
  3. Register the client id and secret in an APEX workspace
  4. Create an APEX app and set up a Google-based social sign-in authentication scheme
  5. Create an APEX authorisation scheme
  6. Extract the user's name (and optionally gender, locale and picture) from the Google OAuth 2.0 call and store them in application items
  7. Build a few simple tables that hold the user details such as their internal ID, when they last logged in etc
  8. Discuss roles in the app to secure functions for different user types. Create an admin page and some reporting on user access. Implement this initially with an authorisation scheme such as a SQL statement of 1=1
  9. Introduce IDCS and roles, and demonstrate setting this up with the Administration role
  10. Moving back to the UT application, we will modify the authentication and authorisation to provide this function
The “How To” Build Steps

Finally! Here is the hands-on fun bit!

Open a developer account with Google & setup the Developer Account

Once you have a Google account (you already may have one if you use Gmail) you will need to navigate to the Google developer console.

If you have not already done so, enter your details such as your address and mobile phone number. Use a mobile number, as it makes life easier with confirmation text messages.

You will need to agree to Google’s terms and conditions and select whether or not to receive email updates. In terms of billing, it is probably a good idea to receive email updates but, in any case, you can opt out of this later if you want to.

You should now get to the following screen:

Every Google API call you want to make is defined under a "developer project". In doing this, Google makes it nice and easy to control and report on where your API usage occurs, which is important when you have more than one project on the go.
It also gives you the ability to switch off some usage, for example for demos or for a single system, while leaving others unaffected.

Click "Create Project" and give your project a name. This will be pre-filled, and you will need to either be creative or just use the ID as a suffix, for example, to make it unique.

I’m afraid that now means you can’t use “My Project 50628” as I have below!

Once you have a project, you will have to configure the consent screen.

We’re going to make this available to all users with a Google account so select “External”

Enter details to match your setup:

I have just entered the mandatory fields here. For the domain, just enter the base domain for the site. My ATP Always Free Tier APEX home page is as follows, with the base domain in bold:

https://xxxxxxxx.oraclecloudapps.com/ords/r/development/ut/home

The domain is oraclecloudapps.com in this example.

A Scopes configuration page is then loaded:

For this example, we are not going to set any scopes here, so click SAVE AND CONTINUE once again.

Finally, we are taken to a "Test users" page. Here you may choose to initially limit access to yourself and a limited set of users.

Unless you want to do this, click the SAVE AND CONTINUE button again. This project is only for the UT application, so we do not mind sharing it without any test users defined, as access is normally unrestricted (public access in the APEX authentication scheme).

The last step is just the summary of the OAuth consent screen where you can double check the entries you have configured so far:

So now we are almost there on the Google side of things. We just need to generate credentials. Click on Credentials and then click CREATE CREDENTIALS, choosing OAuth Client ID in the dropdown menu:

On the next screen we choose a “Web Application” and then fill in the name you wish to assign this set of credentials:

Before you click CREATE, add a redirect URL by pressing the ADD URI button.

All APEX redirects use the same redirect function, which is just part of your application's URL plus an additional suffix.

Take the URL of any page in your UT application (you can just run one if you are unsure), copy the URL up to the ords section, and then add the extra string:

apex_authentication.callback

Specifically, in my example this is:
https://xxxxxxxx.oraclecloudapps.com/ords/apex_authentication.callback

Once this is defined, press CREATE and that completes the Google integration setup.

A dialog will be presented with the following information. This is what we will now set up in APEX for the Universal Theme Application:

Make a note of the client ID and the Client Secret. We will need these when creating the APEX web credentials in the next section. Press OK a final time and you may now review what you have set up on the Google side of things.

Setting up the APEX environment - Web Credentials

The first step here is to define your web credentials in the APEX Workspace itself. Click on the "App Builder>Workspace Utilities > All Workspace Utilities" menu option:

Next, choose “Web Credentials”:

The list of credentials is shown; click CREATE.

Give your web credentials a name and enter the Client ID and Client Secret from above, repeating the secret in the Verify field, before saving these details:

This is now available to any application within your APEX workspace. Now we shall use it for our UT application. I am assuming you have installed the UT application, or maybe you are setting this up for an application you have developed, but you will need an APEX application at this point. If you already have the UT application and use it for reference, you may want to copy it, so that you keep one version that does not require authentication via Google OAuth2.

Go to application builder and navigate to the UT application you have installed.

Open it in App Builder, then select Shared Components > Authentication Schemes and click CREATE.

Select “Based on a pre-configured scheme from the gallery” from the radio group and press NEXT

Now provide a name of your choice and select “Social Sign-in”:

The above page allows you to specify the following attributes of your authentication scheme:

  • Name – a meaningful name for your scheme
  • Scheme Type – select Social Sign-in
  • Credential Store – select the one you have just created which is specified at the Workspace level
  • Authentication Provider – select Google here. Note that APEX can integrate with any 3rd party IdP as long as it follows the OpenID protocol. Generic OAuth2 providers may be used as well, provided they support OpenID as an inputted scope. In those scenarios you will have to get information on the API endpoints for authentication/access token and userinfo.
  • Scope – we could just use email, but enter profile here; I will cover how to extract additional attributes from the JSON returned on successful login as an extra feature later on
  • Username – here we assign the username to the email address of the Google account user
  • Convert Username to Upper Case – I select Yes here so that I lose case sensitivity for usernames, but this is just down to what you intend to do with the username

The discovery URL is not needed here (as it is pre-defined by APEX when you select "Google" as the authentication provider), but it is worth mentioning. This URL provides JSON that describes the service in detail. You can examine the response by entering it in a browser:

https://accounts.google.com/.well-known/openid-configuration

Of particular interest to us in the resulting JSON is the section listing claims_supported:
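For reference, here is an abbreviated excerpt of what Google's discovery document returns at the time of writing (trimmed to the fields relevant here):

<code>{
  "issuer": "https://accounts.google.com",
  "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
  "token_endpoint": "https://oauth2.googleapis.com/token",
  "userinfo_endpoint": "https://openidconnect.googleapis.com/v1/userinfo",
  "claims_supported": [
    "aud", "email", "email_verified", "exp", "family_name",
    "given_name", "iat", "iss", "locale", "name", "picture", "sub"
  ]
}</code>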

We shall just use email as the identifier for the username, and choose the option of making it uppercase in the application.

Click Create; this completes the creation and switches the default authentication to your Google authentication scheme.

If you run your application now, you will see no difference. This is because all the pages are currently public and require no authentication or authorisation. We will now change page 100 (the Home page) to demonstrate how access can be limited to the users you want to authenticate.

Categories: BI & Warehousing
