Feed aggregator
Microk8s: publishing the dashboard (reachable from remote/internet)
If you enable the dashboard on a microk8s cluster (or single node) you can follow this tutorial: https://microk8s.io/docs/addon-dashboard
The problem is that the command
microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443
has to be re-executed every time you restart the node you use to access the dashboard.
A better configuration can be done this way: run the following command and change
type: ClusterIP --> type: NodePort

kubectl -n kube-system edit service kubernetes-dashboard

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: "2021-01-22T21:19:24Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "3599"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 19496d44-c454-4f55-967c-432504e0401b
spec:
  clusterIP: 10.152.183.81
  clusterIPs:
  - 10.152.183.81
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Then run
root@ubuntu:/home/ubuntu# kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.152.183.81 <none> 443:30713/TCP 4m14s
After that you can access the dashboard via the node port shown after the 443: in the PORT(S) column – in my case https://zigbee:30713.
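If you prefer not to edit the service interactively, the same change can be scripted with kubectl patch. This is only a sketch, assuming the stock kubernetes-dashboard service from above; the fixed node port 30443 is just an example value from the default 30000-32767 range:

# switch the dashboard service from ClusterIP to NodePort
microk8s kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort"}}'

# optionally pin the node port instead of letting Kubernetes pick one at random
microk8s kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 30443}]}}'

# verify which port was assigned
microk8s kubectl -n kube-system get service kubernetes-dashboard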
Eleven Table Tennis: Basics
Assuming you are an IRL player who wants to get as close to the real thing as possible, here is what I'd recommend:
Make sure you have enough space to play
The green box is your playing space. Ideally it should be a square of 2.50 m x 2.50 m. Make sure to leave some space at the front, so you can reach balls close to the net and even a little across the net; otherwise you may become a victim of ghost serves. Leave enough room at the sides – some opponents play angled, just like IRL.
If you don't have enough space for this setup, maybe you shouldn't play multiplayer mode. You can still have fun playing against the ball machine or against the AI. Actually, I think the game is worth the money even in that case.
Use the discord channel
The Eleven TT community is on this discord channel: https://discord.gg/s8EbXWG
I recommend you register there and use the same or a similar name as the one you have in the game. For example, I'm Uwe on discord and uwe. in the game (because the name uwe was already taken). This is handy for getting advice from more experienced players; the game developers are there as well. They are very responsive and keen to improve Eleven TT even further based on your feedback.
There is currently a preview version that has improved tracking. You can simply ask the developers there to get the preview version. I did, and I find it better than the regular version, especially for fast forehand strokes.
Set up your paddle
When you have the Sanlaki paddle adapter (as recommended in the previous post), go to the menu and then to Paddle Settings:

Click on Paddle Position and select the Sanlaki Adapter:

As an IRL player, you may start with an Advanced Paddle Surface:

See how that works for you. Bounciness translates to the speed of your blade – an OFF++ blade would be maximum bounciness. Spin is self-explanatory; there is no tackiness attribute, though. Throw Coefficient translates to sponge thickness: the higher the value, the thicker the sponge.
Serving
This takes some time to get used to. You need to press the trigger on the left controller to first “produce” a ball, then you throw it up and press the trigger again to release it. It took me a while to practice, and I still sometimes fail to release the ball as smoothly as I would like.
What I like very much: there is a built-in arbiter who makes sure your serve is legal according to the ITTF rules. This applies to matches in multiplayer mode as well as to matches in single player mode – but not in free hit mode! Check out the Serve Practice:

It tells you what went wrong, in case your serve was not legal:


I recommend you practice with the AI opponent in single player mode for a while. It has spin lock on by default, which means it will never produce any sidespin. I find that unrealistic. After some practice against the AI in single player mode, you're ready for matches in multiplayer mode against other human opponents.
Microk8s: No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml' while joining a cluster
Kubernetes cluster with microk8s on raspberry pi
If you want to join a node and you get the following error:
microk8s join 192.168.178.57:25000/6a3ce1d2f0105245209e7e5e412a7e54
Contacting cluster at 192.168.178.57
Traceback (most recent call last):
File "/snap/microk8s/1908/scripts/cluster/join.py", line 967, in <module>
join_dqlite(connection_parts)
File "/snap/microk8s/1908/scripts/cluster/join.py", line 900, in join_dqlite
update_dqlite(info["cluster_cert"], info["cluster_key"], info["voters"], hostname_override)
File "/snap/microk8s/1908/scripts/cluster/join.py", line 818, in update_dqlite
with open("{}/info.yaml".format(cluster_backup_dir)) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/var/snap/microk8s/1908/var/kubernetes/backend.backup/info.yaml'
This error happens if you have not enabled DNS on your nodes.
So just run "microk8s.enable dns" on every machine:
microk8s.enable dns
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 192.168.178.57
Adding argument --cluster-dns to nodes.
Configuring node 192.168.178.57
Restarting nodes.
Configuring node 192.168.178.57
DNS is enabled
And after that the join will work as expected:
root@ubuntu:/home/ubuntu# microk8s join 192.168.178.57:25000/ed3f57a3641581964cad43f0ceb2b526
Contacting cluster at 192.168.178.57
Waiting for this node to finish joining the cluster. ..
root@ubuntu:/home/ubuntu# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ubuntu Ready <none> 3m35s v1.20.1-34+97978f80232b01
zigbee Ready <none> 37m v1.20.1-34+97978f80232b01
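If you have several machines to prepare, the enable step can be run on all of them in one go. A minimal sketch, assuming the two hostnames from above and ssh access as root:

# enable the dns addon on every node before joining
for node in zigbee ubuntu; do
  ssh root@"$node" microk8s enable dns
done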
Google Cloud Services and Tools
Google Cloud Services is a set of Computing, Networking, Storage, Big Data, Machine Learning, and Management services offered by Google which runs on the same cloud infrastructure that Google uses internally for YouTube, Gmail, and other end-user products. Want to know more about the tools and services offered by Google Cloud? Read the blog post […]
The post Google Cloud Services and Tools appeared first on Oracle Trainings for Apps & Fusion DBA.
Introduction To Amazon Lex | Conversational AI for Chatbots
Amazon Lex is a service for building conversational interfaces into any application using voice and text. Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR) for converting speech to text, and natural language understanding (NLU) to recognize the intent of the text, to enable you to build applications with highly engaging […]
The post Introduction To Amazon Lex | Conversational AI for Chatbots appeared first on Oracle Trainings for Apps & Fusion DBA.
Introduction To Amazon SageMaker Built-in Algorithms
Amazon SageMaker provides a suite of built-in algorithms to help data scientists and machine learning practitioners get started on training and deploying machine learning models quickly. Want to know more about the Amazon SageMaker Built-in Algorithms? Read the blog post at https://k21academy.com/awsml12 to learn more. The blog post covers: • What Is Amazon SageMaker and […]
The post Introduction To Amazon SageMaker Built-in Algorithms appeared first on Oracle Trainings for Apps & Fusion DBA.
Partner Webcast – Hitchhikers Guide to Oracle Cloud (Part 2)
We share our skills to maximize your revenue!
Announcing SLOB 2.5.3
This is just a quick blog post to inform readers that SLOB 2.5.3 is now available at the following webpage: click here.
SLOB 2.5.3 is a bug fix release. One of the fixed bugs has to do with how SLOB sessions get connected to RAC instances. SLOB users can surely connect to the SCAN service, but for more repeatable testing I advise SLOB 2.5.3 and SQL*Net services configured one per RAC node. This manner of connectivity establishes affinity between schemas and RAC nodes. For example, repeatability is improved if sessions performing SLOB Operations against, say, user7's schema connect to the same RAC node each time you iterate through your testing.
The following is cut and pasted from SLOB/misc/sql_net/README:
The tnsnames.ora in this directory offers an example of
service names that will allow the user to test RAC with
repeatable results. Connecting SLOB sessions to the round
robin SCAN listener will result in SLOB sessions connecting
to random RAC nodes. This is acceptable but not optimal and
can result in varying run results due to slight variations
in sessions per RAC node from one test to another.
As of SLOB 2.5.3, runit.sh uses the SQLNET_SERVICE_BASE and
SQLNET_SERVICE_MAX slob.conf parameters to sequentially
affinity SLOB threads (Oracle sessions) to numbered service
names. For example:
SQLNET_SERVICE_BASE=rac
SQLNET_SERVICE_MAX=8
With these assigned values, runit.sh will connect the first
SLOB thread to rac1 then rac2 and so forth until rac8 after
which the connection rotor loops back to rac1. This manner
of RAC affinity testing requires either a single SLOB
schema (see SLOB Single Schema Model in the documentation)
or 8 SLOB schemas to align properly with the value assigned
to SQLNET_SERVICE_MAX. The following command will connect
32 SLOB threads (Oracle sessions) to each RAC node in an
8-node RAC configuration given the tnsnames.ora example
file in this directory:
$ sh ./runit.sh -s 8 -t 32
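For reference, the entries in that tnsnames.ora follow the usual pattern of one alias per node-specific service. The sketch below is only an illustration: the host names (rac-node1, rac-node2, ...) and the port are assumptions, and the real example file ships in SLOB/misc/sql_net:

rac1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node1)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac1))
  )

rac2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-node2)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = rac2))
  )

# ... and so on up to rac8, one service per RAC node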
Find sku_no values from the table which does not have any records for ven_type='P'
Troubleshooting heavy hash joins
Spooling data to .csv file via SQL Plus
Datapump in Oracle ADB using SQL Developer Web
If you have a small schema in the Oracle Cloud Autonomous Database, you can actually run DataPump from SQL Developer Web. DATA_PUMP_DIR maps to a DBFS mount inside the Oracle Database.
Logged in to my Oracle ADB as "ADMIN"
I check if DATA_PUMP_DIR exists and I find that it is in dbfs :
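The check itself is just a dictionary query; something like the following (run as ADMIN) shows the DBFS path behind the directory object:

select directory_name, directory_path
from   all_directories
where  directory_name = 'DATA_PUMP_DIR';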
I run a PLSQL Block to export the HEMANT schema using the DBMS_DATAPUMP API :
After I drop the two tables in the schema, I run the import using the DBMS_DATAPUMP API and then refresh the list of Tables owned by "HEMANT" :
This method is a quick way of using the Autonomous Database itself when you don't have an external Object Store to hold the DataPump file. So I'd use this only for very small schemas, as the dump itself is loaded into DBFS.
The PLSQL Code is :
REM Based on Script posted by Dick Goulet, posted to oracle-l@freelists.org
REM With modifications by me.
REM Hemant K Chitale
REM Export schema "HEMANT"
declare
  h1 NUMBER := 0;
  h2 varchar2(1000);
  ex boolean := TRUE;
  fl number := 0;
  schema_exp varchar2(1000) := 'in(''HEMANT'')';
  f_name varchar2(50) := 'My_DataPump';
  dp_mode varchar2(100) := 'export';
  blksz number := 0;
  SUCCESS_WITH_INFO exception;
begin
  utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
  if (ex = TRUE) then
    utl_file.fremove('DATA_PUMP_DIR', dp_mode||'.log');
  end if;
  h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP', version => 'COMPATIBLE');
  dbms_datapump.set_parallel(handle => h1, degree => 2);
  dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
  dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
  dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => schema_exp);
  dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
  dbms_datapump.wait_for_job(handle => h1, job_state => h2);
exception
  when SUCCESS_WITH_INFO THEN NULL;
  when others then
    h2 := sqlerrm;
    if (h1 != 0) then
      dbms_datapump.stop_job(h1, 1, 0, 0);
    end if;
    dbms_output.put_line(h2);
end;
REM Import schema "HEMANT"
declare
  h1 NUMBER := 0;
  h2 varchar2(1000);
  ex boolean := TRUE;
  fl number := 0;
  schema_exp varchar2(1000) := 'in(''HEMANT'')';
  f_name varchar2(50) := 'My_DataPump';
  dp_mode varchar2(100) := 'import';
  blksz number := 0;
  SUCCESS_WITH_INFO exception;
begin
  utl_file.fgetattr('DATA_PUMP_DIR', dp_mode||'.log', ex, fl, blksz);
  if (ex = TRUE) then
    utl_file.fremove('DATA_PUMP_DIR', dp_mode||'.log');
  end if;
  h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'SCHEMA', job_name => upper(dp_mode)||'_EXP');
  dbms_datapump.set_parallel(handle => h1, degree => 2);
  dbms_datapump.add_file(handle => h1, filename => f_name||'.dmp%U', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  dbms_datapump.add_file(handle => h1, filename => f_name||'.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
  dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP');
  dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => schema_exp);
  dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
  dbms_datapump.wait_for_job(handle => h1, job_state => h2);
exception
  when SUCCESS_WITH_INFO THEN NULL;
  when others then
    h2 := sqlerrm;
    if (h1 != 0) then
      dbms_datapump.stop_job(h1, 1, 0, 0);
    end if;
    dbms_output.put_line(h2);
end;
Again, I emphasise that this is only for small dumps.
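To confirm that a schema really is small enough for this approach, you can list what DATA_PUMP_DIR holds from SQL. A sketch, assuming the DBMS_CLOUD package that Autonomous Database normally provides:

select object_name, bytes
from   dbms_cloud.list_files('DATA_PUMP_DIR');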
Oracle 19c Automatic Indexing: Non-Equality Predicates Part II (Let’s Spend The Night Together)
MicroK8s: Kubernetes on raspberry pi - get nodes= NotReady
On my little Kubernetes cluster with MicroK8s I got this problem:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
zigbee NotReady <none> 59d v1.19.5-34+b1af8fc278d3ef
ubuntu Ready <none> 59d v1.19.6-34+e6d0076d2a0033
To find the cause, I ran:
kubectl describe node zigbee
and in the output I found:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 18m kube-proxy Starting kube-proxy.
Normal Starting 14m kubelet Starting kubelet.
Warning SystemOOM 14m kubelet System OOM encountered, victim process: influx, pid: 3256628
Warning InvalidDiskCapacity 14m kubelet invalid capacity 0 on image filesystem
Normal NodeHasNoDiskPressure 14m (x2 over 14m) kubelet Node zigbee status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 14m (x2 over 14m) kubelet Node zigbee status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 14m (x2 over 14m) kubelet Node zigbee status is now: NodeHasSufficientMemory
Hmmm – so running additional databases and processes outside of Kubernetes is not such a good idea.
But as a fast solution I ejected the SD card, did a resize and added swap on my laptop, and put the SD card back into the Raspberry Pi...
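For the record, swap can also be added directly on the node, without pulling the SD card. A minimal sketch for Ubuntu; the 2 GB size is just an example, and whether you want swap on a Kubernetes node at all is debatable, since kubelet normally expects swap to be off:

# create and activate a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab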
Need help working with PL/SQL FOR LOOP
Historical question about the definition of the constraining table in the Oracle documentation
Table with LONG data type not being freed
Hint Errors
This is a list of possible explanations of errors that you might see in the Hint Report section of an execution plan. It’s just a list of the strings extracted from a chunk of the 19.3 executable around the area where I found something I knew could be reported, so it may have some errors and omissions – but there are plenty of things there that might give you some idea why (in earlier versions of Oracle) you might have seen Oracle “ignoring” a hint:
internally generated hint is being cleared
hint conflicts with another in sibling query block
hint overridden by another in parent query block
conflicting optimizer mode hints
duplicate hint
all join methods are excluded by hints
index specified in the hint doesn't exist
index specified in hint cannot be parallelized
incorrect number of indexes for AND_EQUAL
partition view set up
FULL hint is same as INDEX_FFS for IOT
access path is not supported for IOT
hint on view cannot be pushed into view
hint is discarded during view merging
duplicate tables in multi-table hint
conditions failed for array vector read
same QB_NAME hints for different query blocks
rejected by IGNORE_OPTIM_EMBEDDED_HINTS
specified number must be positive integer
specified number must be positive number
specified number must be >= 0 and <= 1
hint is only valid for serial SQL
hint is only valid for slave SQL
hint is only valid for dyn. samp. query
hint is only valid for update join ix qry
opt_estimate() without value list
opt_estimate() with conflicting values spec
hint overridden by NO_QUERY_TRANSFORMATION
hinted query block name is too long
hinted bitmap tree wasn't fully resolved
bitmap tree specified was invalid
Result cache feature is not enabled
Hint is valid only for select queries
Hint is not valid for this query block
Hint cannot be honored
Pred reorder hint has semantic error
WITH_PLSQL used in a nested query
ORDER_SUBQ with less than two subqueries
conflicting OPT_PARAM hints
conflicting optimizer_feature_enable hints
because of _optimizer_ignore_parallel_hints
conflicting JSON_LENGTH hints
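For context, the Hint Report that these strings feed into is printed by dbms_xplan in 19c. A sketch for pulling it for the last statement executed; I'm quoting the hint_report format keyword from memory of the 19c documentation, so treat it as an assumption rather than gospel:

select * from table(dbms_xplan.display_cursor(format => 'typical +hint_report'));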
CBO Example
A little case study based on an example just in on the Oracle-L list server. This was supplied with a complete, working, test case that was small enough to understand and explain very quickly.
The user created a table, and used calls to dbms_stats to fake some statistics into place. Here, with a little cosmetic editing, is the code they supplied.
set serveroutput off
set linesize 180
set pagesize 60
set trimspool on

drop table t1 purge;

create table t1 (id number(20), v varchar2(20 char));
create unique index pk_id on t1(id);
alter table t1 add (constraint pk_id primary key (id) using index pk_id enable validate);

exec dbms_stats.gather_table_stats(user, 't1');

declare
        srec     dbms_stats.statrec;
        numvals  dbms_stats.numarray;
        charvals dbms_stats.chararray;
begin
        dbms_stats.set_table_stats(
                ownname => user, tabname => 't1',
                numrows => 45262481, numblks => 1938304, avgrlen => 206
        );

        numvals := dbms_stats.numarray (1, 45262481);
        srec.epc := 2;
        dbms_stats.prepare_column_values (srec, numvals);
        dbms_stats.set_column_stats (
                ownname => user, tabname => 't1', colname => 'id',
                distcnt => 45262481, density => 1/45262481,
                nullcnt => 0, srec => srec, avgclen => 6
        );

        charvals := dbms_stats.chararray ('', '');
        srec.epc := 2;
        dbms_stats.prepare_column_values (srec, charvals);
        dbms_stats.set_column_stats(
                ownname => user, tabname => 't1', colname => 'v',
                distcnt => 0, density => 0,
                nullcnt => 45262481, srec => srec, avgclen => 0
        );

        dbms_stats.set_index_stats(
                ownname => user, indname => 'pk_id',
                numrows => 45607914, numlblks => 101513,
                numdist => 45607914, avglblk => 1, avgdblk => 1,
                clstfct => 33678879, indlevel => 2
        );
end;
/

variable n1 nvarchar2(32)
variable n2 number

begin
        :n1 := 'D';
        :n2 := 50;
end;
/

select
        /*+ gather_plan_statistics */
        *
from    (
        select a.id col0, a.id col1
        from   t1 a
        where  a.v = :n1
        and    a.id > 1
        order by a.id
        )
where   rownum <= :n2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost peeked_binds'));
From Oracle's perspective the table has 45M rows, with a unique sequential key starting at 1 in the id column. The query looks like a pagination query, asking for 50 rows ordered by id. But the in-line view asks for rows where id > 1 (which, initially, means all of them) and applies a filter on the v column.
Of course we know that v is always null, so in theory the predicate a.v = :n1 is always going to return false (or null, but not true) – so the query will never return any data. However, if you read the code carefully you'll notice that the bind variable n1 has been declared as an nvarchar2(), not a varchar2().
Here's the execution plan I got on an instance running 19.3 – and it's very similar to the plan supplied by the OP:
----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |      1 |        |  3747 (100)|      0 |00:00:00.01 |
|*  1 |  COUNT STOPKEY                |       |      1 |        |            |      0 |00:00:00.01 |
|   2 |   VIEW                        |       |      1 |     50 |   3747   (1)|      0 |00:00:00.01 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T1    |      1 |    452K|   3747   (1)|      0 |00:00:00.01 |
|*  4 |     INDEX RANGE SCAN          | PK_ID |      0 |   5000 |     14   (0)|      0 |00:00:00.01 |
----------------------------------------------------------------------------------------------------

Peeked Binds (identified by position):
--------------------------------------
   2 - :2 (NUMBER): 50

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=:N2)
   3 - filter(SYS_OP_C2C("A"."V")=:N1)
   4 - access("A"."ID">1)
The question we were asked was this: “Why does the optimizer estimate that it will return 5,000 entries from the index range scan at operation 4?”
The answer is the result of combining two observations.
First: In the Predicate Information you can see that Oracle has applied a character-set conversion to the original predicate “a.v = :n1” to produce filter(SYS_OP_C2C(“A”.”V”)=:N1). The selectivity of “function of something = bind value” is one of those cases where Oracle uses one of its guesses, in this case 1%. Note that the E-rows estimate for operation 3 (table access) is 452K, which is 1% of the 45M rows in the table.
In real life if you had optimizer_dynamic_sampling set at level 3, or had added the hint /*+ dynamic_sampling(3) */ to the query, Oracle would sample some rows to avoid the need for guessing at this point.
Secondly: the optimizer has peeked the bind variable for the rownum predicate, so it is optimizing for 50 rows (basically doing the arithmetic of first_rows(50) optimisation). The optimizer “knows” that the filter predicate at the table will eliminate all but 1% of the rows acquired, and it “knows” that it has to do enough work to find 50 rows in total – so it can calculate that (statistically speaking) it has to walk through 5,000 (= 50 * 100) index entries to visit enough rows in the table to end up with 50 rows.
Next Steps (left as exercise)
Once you’ve got the answer to the question “Why is this number 5,000?”, you might go back and point out that the estimate for the table access was 95 times larger than the estimate for the number of rowids selected from the index and wonder how that could be possible. (Answer: that’s just one of the little defects in the code for first_rows(n).)
You might also wonder what would have happened in this model if the bind variable n1 had been declared as a varchar2() rather than an nvarchar2() – and that might have taken you on to ask yet another question about what the optimizer was playing at.
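If you want to chase that last point, a quick variation on the model (a sketch, not run here) is to redeclare the bind as a varchar2 and check whether the SYS_OP_C2C() conversion disappears from the predicate section of the plan:

variable n1 varchar2(32)
variable n2 number

begin
        :n1 := 'D';
        :n2 := 50;
end;
/

select
        /*+ gather_plan_statistics */
        *
from    (
        select a.id col0, a.id col1
        from   t1 a
        where  a.v = :n1
        and    a.id > 1
        order by a.id
        )
where   rownum <= :n2
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost peeked_binds'));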
Once you’ve modelled something that is a little puzzle there’s always scope for pushing the model a little further and learning a little bit more before you file the model away for testing on the next version of Oracle.
Question about sequence with lower nextval than column