DBA Blogs

DBMS_CRYPTO (or other) to decrypt nimbus JOSE token signed and encrypted with A128CBC_HS256

Tom Kyte - 1 hour 26 min ago
Good afternoon. I have an APEX application that receives a token. This token was created in Java with the Nimbus JOSE library. It is first signed with the following method: <code>
public static String signer(final String signerKey, final Payload payload) throws JOSEException, ParseException {
    JWSObject jwsObject = new JWSObject(new JWSHeader(JWSAlgorithm.HS256), payload);
    final OctetSequenceKey keySigner = OctetSequenceKey.parse(signerKey);
    JWSSigner jwssigner = new MACSigner(keySigner);
    jwsObject.sign(jwssigner);
    return jwsObject.serialize();
}
</code> It is subsequently encrypted with the following method: <code>
public static String encrypt(final String PayloadString, final String cypherKey) throws ParseException, JOSEException {
    final Payload payload = new Payload(PayloadString);
    final JWEObject jweObject = new JWEObject(new JWEHeader.Builder(JWEAlgorithm.DIR, EncryptionMethod.A128CBC_HS256)
            .contentType("JWT").compressionAlgorithm(CompressionAlgorithm.DEF).build(), payload);
    final OctetSequenceKey keyEncrypter = OctetSequenceKey.parse(cypherKey);
    final JWEEncrypter jweEncrypter = new DirectEncrypter(keyEncrypter);
    jweObject.encrypt(jweEncrypter);
    return jweObject.serialize();
}
</code> I have both the encryption key and the signing key. <b>My question is whether it would be possible to decrypt this token from PL/SQL. If possible, could you tell me how it could be done?</b> I have tried to do it with DBMS_CRYPTO but have not succeeded. If it is not possible with PL/SQL code alone, would it be possible to load the Nimbus JOSE JAR file into Oracle to perform the decryption? Thank you in advance for your attention and help. Regards
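For reference, a minimal PL/SQL sketch of the outer JWE layer under stated assumptions: the compact serialization and the DirectEncrypter key's base64url "k" value are supplied as placeholders, the 256-bit A128CBC-HS256 key splits per RFC 7518 into a 16-byte HMAC key and a 16-byte AES key, and the authentication-tag check is omitted for brevity (it would use DBMS_CRYPTO.MAC with HMAC_SH256). The real obstacle is CompressionAlgorithm.DEF: the decrypted bytes are raw DEFLATE data, which UTL_COMPRESS cannot inflate directly, so that step, and verifying the inner HS256-signed JWT, may well be easier with the Nimbus JAR loaded into the database or with compression dropped on the sender. <code>
declare
  -- placeholders: the received compact JWE and the DirectEncrypter key ("k" value)
  l_jwe      varchar2(32767) := '...header..iv.ciphertext.tag...';
  l_k_b64url varchar2(200)   := '...base64url-encoded key...';

  function b64url_decode(p in varchar2) return raw is
    l varchar2(32767) := translate(p, '-_', '+/');
  begin
    -- restore standard base64 padding, then decode
    l := rpad(l, length(l) + mod(4 - mod(length(l), 4), 4), '=');
    return utl_encode.base64_decode(utl_raw.cast_to_raw(l));
  end;

  l_iv      raw(16);
  l_cipher  raw(32767);
  l_cek     raw(32);
  l_aes_key raw(16);
  l_plain   raw(32767);
begin
  -- compact JWE = header.encryptedKey.iv.ciphertext.tag; with alg "dir" the
  -- encryptedKey part is empty, so the non-empty dot-separated pieces are
  -- header, IV, ciphertext and tag (tag not checked in this sketch)
  l_iv     := b64url_decode(regexp_substr(l_jwe, '[^.]+', 1, 2));
  l_cipher := b64url_decode(regexp_substr(l_jwe, '[^.]+', 1, 3));

  -- RFC 7518: first 16 bytes of the key are the MAC key, last 16 the AES key
  l_cek     := b64url_decode(l_k_b64url);
  l_aes_key := utl_raw.substr(l_cek, 17, 16);

  -- A128CBC-HS256 content encryption = AES-128-CBC with PKCS#7 padding
  l_plain := dbms_crypto.decrypt(
               src => l_cipher,
               typ => dbms_crypto.encrypt_aes128 + dbms_crypto.chain_cbc + dbms_crypto.pad_pkcs5,
               key => l_aes_key,
               iv  => l_iv);

  -- because of CompressionAlgorithm.DEF this is still DEFLATE-compressed data,
  -- not the signed JWT text; inflating it is the part PL/SQL has no built-in for
  dbms_output.put_line('decrypted ' || utl_raw.length(l_plain) || ' bytes');
end;
</code>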
Categories: DBA Blogs

Clear dead transaction

Tom Kyte - 19 hours 46 min ago
We have a weird situation: we completed a rolling patch on a 4-node Exadata. The patch itself may be irrelevant (OS: Exadata image to 23.1.17.0.0.240807, and GI: Grid Infrastructure to 19.22). During the process one big transaction was rolled back and it is now stuck in recovery. The session itself is gone from v$session/v$transaction, and the following query shows it would finish recovering 22 years from now. We have an SR open, but Oracle insists we should not interrupt the recovery. Is there any way we can get rid of these recovering processes or the lost parent transaction? (We tried to bounce the database, but it didn't help.) <code>
SQL> select inst_id, pid, usn, state,
            undoblockstotal "Total",
            undoblocksdone  "DONE",
            undoblockstotal - undoblocksdone "ToDo",
            decode(cputime, 0, 'unknown',
                   to_char(sysdate + (((undoblockstotal - undoblocksdone) /
                                       (undoblocksdone / cputime)) / 86400),
                           'DD-MON-YYYY HH24:MI:SS')) "Finish at"
     from   gv$fast_start_transactions
     order  by 7 desc;

2  31595  RECOVERING  52835086    7121  52827965  22-SEP-2056 08:27:12
3  31595  RECOVERING  52835907    8120  52827787  31-OCT-2052 20:17:24
4  31595  RECOVERING  52835085    7472  52827613  22-MAR-2055 23:28:17
1  31595  RECOVERING  52835074  234077  52600997  07-SEP-2025 21:04:27
2  31578  RECOVERED         12      12         0  18-SEP-2024 17:36:15
</code>
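For reference, two hedged checks that are often useful here (documented views and parameter; do not change anything on a production recovery without Support's agreement): V$FAST_START_SERVERS shows whether the parallel-rollback slaves are actually making progress, and lowering FAST_START_PARALLEL_SERVERS to 0 is the commonly discussed way to fall back to serial rollback when parallel recovery keeps thrashing. <code>
-- are the rollback slaves progressing at all?
select inst_id, state, undoblocksdone, pid, xid
from   gv$fast_start_servers;

show parameter fast_start_parallel_servers

-- commonly discussed fallback: let SMON roll back serially instead of in parallel
-- alter system set fast_start_parallel_servers = 0;
</code>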
Categories: DBA Blogs

Move partitions between databases when tables have user-defined types

Tom Kyte - 19 hours 46 min ago
Hello, I have the following task. We have one big partitioned table and need to move some partitions to a backup database. The idea was to run this in the backup database: <code>insert into table1_BCK select * from table1@dblink partition (abc)</code> and, after the insert, drop the source partition. The problem is that we get an error; the reason is that the source table has some columns of a user-defined type, like <code>create type cust_type as varray(13) of number;</code> The same type also exists in the backup database, and the source and target table DDLs are exactly the same. Could you suggest an easy solution for moving a partition from source to target without having to parse or retype these varrays? Thanks, Jaroslav
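For reference, a hedged DBMS_DATAPUMP sketch run on the backup database (link, schema, table and partition names are placeholders, and the exact quoting of the partition list may differ by release). Data Pump over a network link carries the user-defined-type columns, unlike INSERT ... SELECT over the link. <code>
declare
  l_h     number;
  l_state varchar2(30);
begin
  l_h := dbms_datapump.open(operation   => 'IMPORT',
                            job_mode    => 'TABLE',
                            remote_link => 'SRC_DB_LINK');

  dbms_datapump.metadata_filter(l_h, 'SCHEMA_EXPR', 'IN (''SRC_SCHEMA'')');
  dbms_datapump.metadata_filter(l_h, 'NAME_EXPR',   'IN (''TABLE1'')');

  -- only the one partition
  dbms_datapump.data_filter(l_h, 'PARTITION_LIST', '''ABC''', 'TABLE1', 'SRC_SCHEMA');

  -- land the rows in the existing backup table
  dbms_datapump.metadata_remap(l_h, 'REMAP_TABLE', 'TABLE1', 'TABLE1_BCK');
  dbms_datapump.set_parameter(l_h, 'TABLE_EXISTS_ACTION', 'APPEND');

  dbms_datapump.start_job(l_h);
  dbms_datapump.wait_for_job(l_h, l_state);
  dbms_output.put_line('Data Pump job finished: ' || l_state);
end;
</code>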
Categories: DBA Blogs

How to convert a list of latitudes and longitudes into an SDO_GEOMETRY polygon

Tom Kyte - 19 hours 46 min ago
Greetings, I have a table that stores multiple locations (lat and lon) for a project. I want to convert this list of lats and lons into an SDO_GEOMETRY so that I can plot a polygon on the map. I created this procedure to insert the SDO_GEOMETRY column into a temporary table, but I am getting "PL/SQL: numeric or value error: character to number conversion error". Not sure what I am doing wrong here. Any help will be greatly appreciated. Thanks, Vatsa. Here is the procedure: <code>
create or replace procedure "GEN_POLYGON_FOR_PROJ"
( p_project_number in number ) as
  l_poly     sdo_geometry;
  l_loc_list varchar2(2000) := NULL;
  Cursor L (p_proj number) is
    Select to_char(lon)||','||to_char(lat) as loc
    FROM PM_PROJECT_LOCATION
    where proj_id = p_proj;
begin
  FOR I in L(p_project_number) Loop
    l_loc_list := l_loc_list ||','|| I.loc;
    dbms_output.put_line(l_loc_list);
  END LOOP;
  dbms_output.put_line(substr(l_loc_list,2));
  l_poly := mdsys.sdo_geometry(2003, 1041002, null,
              sdo_elem_info_array(1, 1003, 1),
              MDSYS.sdo_ordinate_array(substr(l_loc_list,2)));
  delete from VR_PROJECT_POLYGONS where project_number = p_project_number;
  insert into VR_PROJECT_POLYGONS(PROJECT_NUMBER,PROJ_POLYGON)
  values (p_project_number,l_poly);
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(SQLERRM);
end "GEN_POLYGON_FOR_PROJ";
</code>
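For reference, a hedged rework (the LOCATION_SEQ ordering column is an assumption; keep your own SRID): SDO_ORDINATE_ARRAY expects individual numbers, so passing one comma-separated string makes Oracle try to convert the whole string to a single number, which is the character-to-number error. Build the array element by element and close the ring. <code>
create or replace procedure gen_polygon_for_proj (p_project_number in number)
as
  l_ords sdo_ordinate_array := sdo_ordinate_array();
  l_poly sdo_geometry;
begin
  for r in (select lon, lat
            from   pm_project_location
            where  proj_id = p_project_number
            order  by location_seq)           -- assumed vertex-ordering column
  loop
    l_ords.extend(2);
    l_ords(l_ords.count - 1) := r.lon;
    l_ords(l_ords.count)     := r.lat;
  end loop;

  -- a polygon ring must close: repeat the first vertex at the end
  l_ords.extend(2);
  l_ords(l_ords.count - 1) := l_ords(1);
  l_ords(l_ords.count)     := l_ords(2);

  l_poly := sdo_geometry(2003, 1041002,       -- keep your SRID; 2003 = 2-D polygon
                         null,
                         sdo_elem_info_array(1, 1003, 1),
                         l_ords);

  delete from vr_project_polygons where project_number = p_project_number;
  insert into vr_project_polygons (project_number, proj_polygon)
  values (p_project_number, l_poly);
end gen_polygon_for_proj;
</code>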
Categories: DBA Blogs

Text search functionality with keyword search

Tom Kyte - Tue, 2024-09-24 09:46
Hi Team, We have been studying various options for text search functionality for our Oracle-backed (storage + business logic) ERP application. One of the key capabilities we expect from text search is keyword search. The two main Oracle features in this area that we have gone through are Oracle Text and Oracle AI Vector Search (in 23c). Oracle Text we have already implemented in some parts of our application, and the required keyword search functionality is there. But in general, with Oracle Text there are some challenges, especially with large data like LOBs and keeping the index updated in real time. The index update can be made asynchronous, but then the search itself no longer reflects real-time data. As per your documentation, Oracle AI Vector Search seems to be a novel approach that allows advanced semantic search by understanding both the meaning and the context behind the data. But the supporting documentation does not clearly explain whether it also offers keyword-based search on demand. Does Oracle AI Vector Search support keyword search? Among Oracle database technologies, what would be the best options now (at a high level) for implementing text search with a keyword search option? Thanks & Best Regards, Navinth
Categories: DBA Blogs

Issue with Merge statement

Tom Kyte - Mon, 2024-09-23 15:26
Good morning. 1. There is a source table with columns A and B. The primary key is column A. It has a few records. 2. There is a target table with the same columns and data types as the source table. It also has a few records. 3. The requirement is to merge the data from the source table into the target table as follows: 3a. If the target table has a record for a given source record (based on the primary key), then update the remaining columns of that target record. 3b. If the target table does not have a record for a given source record (based on the primary key), then insert the record into the target table. 3c. Delete the records from the target table for which there is no matching record (based on the primary key) in the source table. The DELETE statement in the MATCHED branch works only when a specific condition in the UPDATE statement is commented out. It is supposed to work even when that condition is uncommented. Please try two runs, once with it commented out and once with it uncommented. I would like to understand what I am missing. <code>
-- DROP TABLE target_tab;
-- DROP TABLE source_tab;
CREATE TABLE source_tab(a NUMBER PRIMARY KEY, b NUMBER);
INSERT INTO source_tab(a, b) VALUES (1, 1);
INSERT INTO source_tab(a, b) VALUES (2, 2);
INSERT INTO source_tab(a, b) VALUES (3, 3);
COMMIT;

CREATE TABLE target_tab(a NUMBER PRIMARY KEY, b NUMBER);
INSERT INTO target_tab(a, b) VALUES (1, 2);
INSERT INTO target_tab(a, b) VALUES (2, 2);
INSERT INTO target_tab(a, b) VALUES (4, 4);
INSERT INTO target_tab(a, b) VALUES (5, 5);
COMMIT;

-- Merge: the DELETE in the MATCHED branch works only when the marked line in the UPDATE is commented out
MERGE INTO target_tab tgt
USING (SELECT NVL(s.a, t.a) a_whichever_is_not_null,
              s.a s_a, s.b s_b, t.a t_a, t.b t_b
       FROM   source_tab s
       FULL   JOIN target_tab t ON (s.a = t.a)) fojv
ON (fojv.a_whichever_is_not_null = tgt.a)
WHEN MATCHED THEN
  UPDATE SET tgt.b = fojv.s_b
  WHERE fojv.t_a IS NOT NULL
  -- AND fojv.s_a IS NOT NULL
  DELETE WHERE fojv.s_a IS NULL AND fojv.t_a IS NOT NULL
WHEN NOT MATCHED THEN
  INSERT (tgt.a, tgt.b) VALUES (fojv.s_a, fojv.s_b);
</code>
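For reference: the DELETE WHERE clause is evaluated only against rows that the UPDATE branch actually updated, so a row filtered out by "AND fojv.s_a IS NOT NULL" can never be deleted. A hedged rework that keeps the delete working is to drop that predicate; the interim update of tgt.b to NULL for those rows is harmless because they are deleted in the same statement. <code>
MERGE INTO target_tab tgt
USING (SELECT NVL(s.a, t.a) a_whichever_is_not_null,
              s.a s_a, s.b s_b, t.a t_a, t.b t_b
       FROM   source_tab s
       FULL   JOIN target_tab t ON (s.a = t.a)) fojv
ON (fojv.a_whichever_is_not_null = tgt.a)
WHEN MATCHED THEN
  UPDATE SET tgt.b = fojv.s_b
  DELETE WHERE fojv.s_a IS NULL
WHEN NOT MATCHED THEN
  INSERT (tgt.a, tgt.b) VALUES (fojv.s_a, fojv.s_b);
</code>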
Categories: DBA Blogs

How to refer to a column in the select list other than by its name?

Tom Kyte - Thu, 2024-09-19 20:26
Hi Tom, Can I use a column's position instead of its name in a select list? I have a list of tables that I need to access by their first column in a loop, and the first column names are all different. I would like to use the same select statement for each table. Is that doable? If yes, could you show me the syntax or point me to some documentation? Thanks.
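For reference, a hedged sketch (table list and the aggregate are placeholders): SQL has no positional column reference, but the first column's name can be looked up in the dictionary and the statement built dynamically. <code>
declare
  l_col user_tab_columns.column_name%type;
  l_cnt number;
begin
  for t in (select table_name
            from   user_tables
            where  table_name in ('T1', 'T2', 'T3'))   -- your table list
  loop
    -- name of the first column of this table
    select column_name
      into l_col
      from user_tab_columns
     where table_name = t.table_name
       and column_id  = 1;

    execute immediate
      'select count(distinct ' || dbms_assert.simple_sql_name(l_col) || ')' ||
      '  from ' || dbms_assert.sql_object_name(t.table_name)
      into l_cnt;

    dbms_output.put_line(t.table_name || '.' || l_col || ': ' || l_cnt);
  end loop;
end;
</code>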
Categories: DBA Blogs

PASSWORD_ROLLOVER_TIME - check actual expiry date

Tom Kyte - Thu, 2024-09-19 20:26
Hi "Tom", A question regarding the Gradual Password Rollover feature. I'm running Oracle Enterprise 19.24 on Linux. Consider this example, assuming I run it on September 1st: <code>create profile PWROLLOVER limit password_rollover_time 10; alter user SCOTT profile PWROLLOVER; alter user SCOTT identified by "Lion2024";</code> This means that Scott will be able to use the old and new password until September 10th, after that the old password will expire and only the new one will work. I review this date by checking PASSWORD_CHANGE_DATE from DBA_USERS and the respective LIMIT from DBA_PROFILES. So far so good. Now consider this, executed on September 5th: <code>alter profile PWROLLOVER limit password_rollover_time 30; </code> To my knowledge, the expiry date of the old password is set when it is <b>changed</b>, so it will remain Sep. 10th. Q1: Is this correct? Q2: How/where can I see the actual expiry date for Scott after the profile change? Thanks! Bjoern
Categories: DBA Blogs

Resources on Business Logic Architecture in Oracle Database

Tom Kyte - Thu, 2024-09-19 02:06
Hi, Can you recommend any resources or books focused on business logic architecture specifically for Oracle Database? I am looking to understand best practices for implementing business logic, whether in the database layer through PL/SQL or through application design strategies. Any guidance or recommended reading would be greatly appreciated.
Categories: DBA Blogs

Unable to open database in read write mode; it is in read-only mode

Tom Kyte - Thu, 2024-09-19 02:06
1. I installed Oracle Database 23ai Free on my Windows 11 laptop, along with Oracle APEX 24.1, ORDS 24.2 and Tomcat. It was working fine. 2. After 15 days the listener stopped working (the machine shut down due to low battery and was restarted after charging). I shut down and restarted the database and the listener. Both my root and pluggable containers are now in read-only mode; screenshot attached: https://drive.google.com/file/d/1LjO8yjg2lrKrUvVgWJiqOUDRHi9m03yp/view?usp=sharing 3. I restarted the listener and this time it shows the service in read-only mode; screenshot attached: https://drive.google.com/file/d/16ae5SHkCTy1h42t6xfPxb9lD1bVI9eqA/view?usp=sharing 4. The alert_free.log is attached here: https://drive.google.com/file/d/1MWedFUcg1tQWnMCChfB2GAeOMbtcBys9/view?usp=sharing I tried all the possible solutions I could find on the net but did not succeed. Please guide me.
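For reference, a hedged sequence to try from SQL*Plus as SYSDBA, assuming the datafiles themselves are healthy; check the alert log for why the instance came up read-only before forcing anything. <code>
select name, open_mode from v$database;
select name, open_mode from v$pdbs;

shutdown immediate
startup mount
alter database open read write;

alter pluggable database all open read write;
-- remember the state so the PDBs reopen read write after the next restart
alter pluggable database all save state;
</code>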
Categories: DBA Blogs

Easiest RAG Tutorial for Beginners on Free Google Colab

Pakistan's First Oracle Blog - Wed, 2024-09-18 16:29

This video is a step-by-step tutorial on learning RAG the easy way with LlamaIndex, on your own data, in a free Google Colab notebook.



Code:



!pip install llama-index faiss-cpu pandas python-dotenv openai transformers numpy
!pip install llama-index-agent-openai llama-index-cli llama-index-core llama-index-embeddings-openai
!pip install llama-index-llms-openai llama-index-program-openai llama-index-question-gen-openai llama-index-readers-file
!pip install llama-index-readers-llama-parse llama-index-vector-stores-faiss llama-parse llama-index-indices-managed-llama-cloud

from llama_index.core.readers import SimpleDirectoryReader
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.readers.file import PagedCSVReader
from llama_index.vector_stores.faiss import FaissVectorStore
from llama_index.core.ingestion import IngestionPipeline
from llama_index.core import VectorStoreIndex
import faiss
import os
import pandas as pd

# Pull the OpenAI API key from Colab's secrets manager and expose it to the SDK
from google.colab import userdata
os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')

# Global LlamaIndex settings: the answering LLM and a 512-dimension embedding model
EMBED_DIMENSION = 512
Settings.llm = OpenAI(model="gpt-3.5-turbo")
Settings.embed_model = OpenAIEmbedding(model="text-embedding-3-small", dimensions=EMBED_DIMENSION)

# Quick sanity check of the CSV with pandas before indexing it
file_path = '/content/minifinance.csv'
data = pd.read_csv(file_path)
data.head()

# Flat L2 FAISS index sized to the embedding dimension, wrapped as a LlamaIndex vector store
faiss_index = faiss.IndexFlatL2(EMBED_DIMENSION)
vector_store = FaissVectorStore(faiss_index=faiss_index)

# PagedCSVReader turns each CSV row into its own document
csv_reader = PagedCSVReader()

reader = SimpleDirectoryReader(
    input_files=[file_path],
    file_extractor={".csv": csv_reader},
)

docs = reader.load_data()

# Inspect the first row-document
print(docs[0].text)

# Ingestion pipeline wired to the FAISS vector store; run() processes the documents into nodes
pipeline = IngestionPipeline(
    vector_store=vector_store,
    documents=docs,
)

nodes = pipeline.run()

# Index the nodes and ask a question, retrieving the 2 most similar rows as context
vector_store_index = VectorStoreIndex(nodes)
query_engine = vector_store_index.as_query_engine(similarity_top_k=2)

response = query_engine.query("which products are sold in Canada?")
response.response
Categories: DBA Blogs

Dbms_Flashback

Tom Kyte - Tue, 2024-09-17 13:26
Tom, 1. Are flashback queries useful for the developer or the DBA? 2. I observe that one needs the SCN to do a flashback query. How can I, as a developer, get to know the SCN of a transaction? 3. A single transaction can have multiple deletes and a single SCN identifying all of them. What if I want to flash back only a single individual delete? Can you explain with an example of a flashback query? Thank you.
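For reference, a hedged sketch against a hypothetical EMP table, showing both SCN- and timestamp-based flashback queries and one way to re-insert rows removed by a single delete. <code>
-- capture the current SCN, or map a time to an SCN after the fact
select dbms_flashback.get_system_change_number from dual;
select timestamp_to_scn(systimestamp - interval '10' minute) from dual;

-- see the table as it was at a point in the past
select * from emp as of timestamp (systimestamp - interval '10' minute);
select * from emp as of scn :scn;

-- put back only the rows removed since that SCN (i.e. undo one delete)
insert into emp
select * from emp as of scn :scn_before_delete
minus
select * from emp;
</code>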
Categories: DBA Blogs

Fetching data slow with 1 million rows on Oracle

Tom Kyte - Tue, 2024-09-17 13:26
I want to retrieve 1 million rows from an Oracle database; it is slow and takes over 1 hour 30 minutes. <code>select * from MISADMIN.FBNK_STMT_ENTRY_ALL where BOOKING_DATE = '20240630'</code>
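For reference, a hedged sketch assuming BOOKING_DATE is a DATE column: comparing it to the string '20240630' forces an implicit conversion, so use date literals and check the execution plan to see whether an index on BOOKING_DATE can then be used. A large share of the elapsed time for a million-row fetch is also usually on the client side, so raise the client's array/fetch size as well. <code>
select *
from   misadmin.fbnk_stmt_entry_all
where  booking_date >= date '2024-06-30'
and    booking_date <  date '2024-07-01';
</code>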
Categories: DBA Blogs

EDITIONS EBR

Tom Kyte - Thu, 2024-09-05 09:26
It took me a while to pick database development over database administration, and now I am dealing with editions. My question is simple: if I've decided to create an editioning view over a table with N triggers, what should I do about them? Some people say all triggers must be moved to the editioning view. Others say that triggers that maintain auditing columns (created by, date of creation, updated, etc.) or populate surrogate key columns (before insert) should remain on the table. Kind regards, Mauricio
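For reference, a hedged sketch of one commonly used EBR layout (illustrative names): the table is renamed, an editioning view takes over the old name, and ordinary row-level DML triggers such as auditing or surrogate-key triggers are recreated on the editioning view so they can be replaced edition by edition. <code>
alter table emp rename to emp_t;

create or replace editioning view emp as
select empno, ename, created_by, created_on
from   emp_t;

create or replace trigger emp_audit_bi
before insert on emp            -- on the editioning view, not the base table
for each row
begin
  :new.created_by := sys_context('USERENV', 'SESSION_USER');
  :new.created_on := systimestamp;
end;
/
</code>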
Categories: DBA Blogs

A problem when ranking regression slopes over a database link

Tom Kyte - Thu, 2024-09-05 09:26
Dear Colleagues, I have come across a problem when ranking regression slopes over a database link. I have a table containing dates and file volumes for downloads from several sources. I want to build a Top N of the sources growing most in volume. For this, I calculate slopes for all the sources using the regr_slope() function and then rank the slopes using row_number() over (order by slope desc nulls last) as rn_top. Then I want to select rows where rn_top <= :n. The results of the query obtained over a database link differ from the results obtained locally: the locally obtained results are correct, the remote ones show strange behavior. The regression is correct and the ranking is correct; select * from (the ranking query) is NOT correct, even without any "where" condition: the slopes are correct, but their ranking is not. This effect does not occur for constants, nor for max/min/avg functions, only for regr_* functions. The effect is stable for two versions of remote servers, Oracle Database 19c Enterprise Edition and Oracle Database 11g Express Edition, with the database link originating from Oracle Database 19c Enterprise Edition. For reproducing: <code>
CREATE TABLE "TEST_003"
  ("TASK_SOURCE_ID"       NUMBER(6,0),
   "TASK_DATE_DOWNLOADED" DATE,
   "TASK_FILE_SIZE"       VARCHAR2(128));
/* historically task_file_size is varchar in the original table */
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('26.08.24','DD.MM.RR'),'8266');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114657',to_date('26.08.24','DD.MM.RR'),'6925');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('26.08.24','DD.MM.RR'),'8783');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('26.08.24','DD.MM.RR'),'6590');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('26.08.24','DD.MM.RR'),'7204');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('122994',to_date('26.08.24','DD.MM.RR'),'59904');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('120116',to_date('27.08.24','DD.MM.RR'),'35125');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('27.08.24','DD.MM.RR'),'8226');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114658',to_date('27.08.24','DD.MM.RR'),'8784');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114659',to_date('27.08.24','DD.MM.RR'),'6591');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114660',to_date('27.08.24','DD.MM.RR'),'7206');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNLOADED,TASK_FILE_SIZE) values ('114656',to_date('28.08.24','DD.MM.RR'),'8230');
Insert into TEST_003 (TASK_SOURCE_ID,TASK_DATE_DOWNL...
</code>
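Pending a root cause, a hedged workaround sketch: compute the slopes in an aggregated inline view (materialized, so it is evaluated once) and rank that stable row source locally, rather than letting the analytic re-evaluate regr_slope() during the distributed execution. The db link name and the anchor date used to turn the DATE into a number are placeholders. <code>
with slopes as (
  select /*+ materialize */
         task_source_id,
         regr_slope(to_number(task_file_size),
                    task_date_downloaded - date '2024-08-01') as slope
  from   test_003@remote_db
  group  by task_source_id
)
select *
from  (select s.*,
              row_number() over (order by slope desc nulls last) as rn_top
       from   slopes s)
where rn_top <= :n;
</code>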
Categories: DBA Blogs

Extract a node value from XML stored in a BLOB in an Oracle table with 266 million rows

Tom Kyte - Thu, 2024-09-05 09:26
Hello Tom, I have a table with around 266,573,283 rows. The table has a BLOB column which stores XML data. I need to parse the XML in that column and get the value of one node. The node may or may not exist in every row. I have tried using Python (the cx_Oracle package). The table is partitioned on the date (month and year) the rows were stored. I am not able to process the XML. Is there a way or an approach to process this XML data? Note: this table will not grow any more because the application has been shut down. Row counts per year: 2018: 3,543,136; 2019: 3,369,956; 2020: 7,576,397; 2021: 82,413,536; 2022: 123,216,864; 2023: 46,453,394. Thanks & Regards, Mani R
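For reference, a hedged SQL-only sketch using XMLTABLE directly over the BLOB; table, column, partition-key and XPath names are placeholders, and the character set id must match how the XML was stored. Driving it partition by partition (here by a date range) and in parallel keeps each pass manageable. <code>
select /*+ parallel(t 8) */
       t.id,
       x.node_val
from   big_xml_table t,
       xmltable('/Document'
                passing xmltype(t.xml_blob, nls_charset_id('AL32UTF8'))
                columns node_val varchar2(200) path 'Header/MyNode') x
where  t.part_date >= date '2022-01-01'
and    t.part_date <  date '2023-01-01';
</code>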
Categories: DBA Blogs

Big PL/SQL block passes wrong parameters for function call

Tom Kyte - Thu, 2024-09-05 09:26
While executing a big PL/SQL block, some of the numeric parameters get lost and are replaced with an earlier value. There is no error message; the PL/SQL block executes, but some of the function calls receive the wrong parameter values. See the LiveSQL link: it defines a function which compares its input parameters and calls it some 11 thousand times, and at the 10932nd call it gets mismatched values. The problem seemingly occurs above a certain program size, yet it does not appear to hit the DIANA nodes limit: there is no PLS-00123, the program executes, but it stores the wrong data in the database. Is there a size limit I am bumping into, or could it be an Oracle bug (one that reproduces on multiple instances and different Oracle versions)?
Categories: DBA Blogs

How to connect SQL Developer using an Oracle wallet external password store

Tom Kyte - Wed, 2024-09-04 15:06
Hi, How can I connect SQL Developer using Oracle wallet-based security? From SQL*Plus we can connect using a command like /@sid, but how can I connect from SQL Developer (or Toad) using wallet-based security? Thanks in advance.
Categories: DBA Blogs

Dynamic SQL

Tom Kyte - Wed, 2024-09-04 15:06
Hi, thank you for taking my question. I wanted to know the logic behind not being able to reference :new and :old variables in dynamic SQL: <code>execute immediate 'insert into tablea (columna, columnb) values (:old.columna, :new.columnb)';</code> This fails; I am just trying to understand the logic behind it. Thank you for your response. PS: constant browser of your site, loving it!
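For reference, a hedged sketch of the usual workaround (trigger and table names are illustrative): :old and :new are pseudo-records that exist only to the trigger's PL/SQL, not SQL bind variables, so their values have to be passed into the dynamic statement with a USING clause. <code>
create or replace trigger tablea_copy_aiu
after insert or update on some_table      -- placeholder trigger/table names
for each row
begin
  execute immediate
    'insert into tablea (columna, columnb) values (:1, :2)'
    using :old.columna, :new.columnb;
end;
/
</code>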
Categories: DBA Blogs

Object Type Variables in Bulk Collect and FORALL

Tom Kyte - Wed, 2024-09-04 15:06
I was reading the following discussion about using object types: https://asktom.oracle.com/ords/f?p=100:11:607033327783906::::: https://docs.oracle.com/cd/B10501_01/appdev.920/a96624/10_objs.htm But I couldn't understand how object types are used in packages, mainly in bulk processing. Is it possible to use objects in bulk processing? If yes, can you please help with an example?
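A minimal sketch under stated assumptions (EMP source table and all names are illustrative): BULK COLLECT a collection of object instances constructed in the query, then FORALL-insert the whole objects into an object table; for a relational target, an INSERT ... SELECT FROM TABLE(collection) is often simpler than referencing individual attributes inside FORALL. <code>
create or replace type emp_obj as object (empno number, ename varchar2(30));
/
create or replace type emp_obj_list as table of emp_obj;
/
create table emp_archive of emp_obj;      -- object table used as the target

declare
  l_emps emp_obj_list;
begin
  -- build the objects in SQL and bulk collect them in one round trip
  select emp_obj(empno, ename)
    bulk collect into l_emps
    from emp;                             -- assumed source table

  -- bulk insert, binding each whole object instance
  forall i in 1 .. l_emps.count
    insert into emp_archive values (l_emps(i));

  -- relational-target alternative:
  -- insert into emp_copy (empno, ename)
  -- select t.empno, t.ename from table(l_emps) t;
end;
/
</code>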
Categories: DBA Blogs
