Hi, this is Ryotaro.
I have a question about the Postgres setup for the PWH on DT 6.5.
I'd like to know whether the instance is created automatically or not.
What I have already done:
■ created the owner user
If I have to create the instance or database myself, how should I set it up?
I'm sorry for asking about Postgres here, but I don't have any experience with creating databases...
What happens if you do not specify a capacity when running CREATE DATABASE?
I was able to run the CREATE DATABASE command without specifying a capacity, and the DT client could connect.
Will the instance be expanded automatically, or not?
I'd appreciate an answer soon; I need to explain this to our customer tomorrow...
You have to create a "database". I use pgAdmin: right-click on "Databases" -> New Database...
Set the owner to your newly created user.
As encoding, use UTF8 (if possible).
Other than that, no need to change the default settings.
I've never had an issue with tablespace capacity in PostgreSQL, and although I don't know for sure myself, I would say it can grow up to the size of the available disk.
If you don't have pgadmin access, you could execute something like this in the command line:
CREATE DATABASE <db_name> ENCODING 'UTF8' OWNER <user_name>;
However, none of this takes performance into account. If you discover that the db doesn't perform as well as you'd expect, you need to tune it, but that's normally a topic for a DBA (things like setting up regular VACUUM/ANALYZE intervals or configuring memory parameters, etc.).
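Just as a rough illustration of what "memory parameters" means here — the settings below are real postgresql.conf parameters, but the values are made-up assumptions, not recommendations for your hardware:

```
# postgresql.conf — illustrative values only, tune to your own machine
shared_buffers = 2GB            # shared page cache; often ~25% of RAM
work_mem = 64MB                 # per sort/hash operation, per connection
maintenance_work_mem = 512MB    # used by VACUUM and index builds
effective_cache_size = 6GB      # planner hint: OS + PG cache size
```

A DBA would pick these based on RAM, concurrent connections, and workload; the defaults are deliberately conservative.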
Let me know if you need more details, thanks,
Thank you so much for your answer!
However, I need more information so I can explain it to our customer.
When creating the DB as the PWH, which parameters should I usually tune, and for what purpose?
Unfortunately I'm not in a position to help you much there, and I doubt anyone in the dev team here is. What you need for this is a DBA, and we aren't that.
I can only tell you what I believe to be the idiosyncrasies of AppMon regarding its usage of the db:
1. Dynatrace is very "write"-heavy. If possible, try to maximize write I/O by sizing the buffers accordingly and by using fast disks.
2. The most critical table is measurement_high (for large systems at least). Observe it closely and set up frequent vacuum / analyze cycles for it.
3. The next tier in importance would be dynamic_measure, percentiles_high, and incidentrecords.
There's also a partitioning script for Postgres; use it if the clean-up task takes too long.
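To make point 2 concrete, here is a sketch of what "frequent vacuum/analyze cycles" for a hot table could look like. The table name comes from this thread, but the threshold values are illustrative assumptions only:

```
-- Per-table autovacuum settings (scale factors are assumptions, not recommendations):
ALTER TABLE measurement_high SET (
    autovacuum_vacuum_scale_factor = 0.05,   -- vacuum after ~5% of rows are dead
    autovacuum_analyze_scale_factor = 0.02   -- analyze after ~2% of rows change
);

-- Or run a manual cycle during a quiet period:
VACUUM (ANALYZE) measurement_high;
```

The per-table settings override the global autovacuum defaults, which are usually too lazy for a table this write-heavy.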
Sorry, but I can't help you much with performance tuning your database, best regards,