Channel: All Data Protector Practitioners Forum posts

Re: Advanced GRE plugin Vcenter 5.5.0 U2 Data Protector 9.02


Hi,

 

I can confirm that everything is working as expected. I did some restore tests, and it was not necessary to add the firewall rule.

I really appreciate your detailed insight on this.

I don't know if I can mark the topic as completed since I'm not the initiator.

 

BR,

Mihai.


Re: Advanced GRE plugin Vcenter 5.5.0 U2 Data Protector 9.02


Please assign a Kudo (thumbs up) if this works for you. :)

Regards,
Sebastian Koehler

Re: DP support for XenServer


Hi Carlos,

The recommendation is to use backup agents inside the VMs. If you use the Disk Agent + AutoDR (on supported Windows/Linux versions), you're able to restore single items or the whole operating system. The XenServer scripts always had issues with the requirement for offline backups to back up the VM configuration files.

Please use the Accept Solution button next to my post and assign a Kudo (thumbs up) if this works for you.

Regards,
Sebastian Koehler

Re: [Critical] None of the Disk Agents completed successfully. Session has failed.


I will try the steps and let you know the status as soon as possible.

Regards,

Narendran

(DP) Support Tip: Using a Single Store to serve multiple B2D devices is not supported.


During my time supporting the StoreOnce technology, both hardware and software, I have seen several environments with multiple B2D devices (libraries) configured to point to the same store.

Doing so will create a multitude of media item management issues. Configuring multiple B2D devices that point to the same store is not supported. Please see the following statement from the Deduplication User Guide:

 
It is not supported for more than one B2D device to access the same store. This means that each B2D device must be configured to a dedicated store.  Do not configure a second device to use the same store.
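One way to spot this condition is to compare the store locations of all configured B2D libraries. A minimal check, assuming a UNIX Cell Manager (the binary path may differ on your installation):

/opt/omni/bin/omnidownload -list_libraries -detail

If two different B2D library definitions show the same target store (DIRECTORY statement) in the output, they are sharing a store and one of them needs to be reconfigured.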

Re: HP DP vulnerability


(Screenshot attached: barmitzvah.jpg)

Hello Sebastian,

 

This is the message from Nessus. I installed the latest version of DP 9.06 from May with the latest patches for Windows.

 

 

Re: HP DP vulnerability


I'm not that familiar with Nessus. Do you see what output Nessus finds on your client? With 9.06 on Windows it should be something similar to the string I get here when connecting to port 5555/TCP using telnet.

HPE Data Protector A.09.07: INET, internal build 109, built on Freitag, 1. Juli 2016, 23:03
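For reference, this is a quick way to check the banner yourself (the hostname is a placeholder; 5555 is the default Data Protector INET port):

telnet client.example.com 5555

The INET banner is printed as soon as the connection is established, so you can compare the version string Data Protector reports with what Nessus shows.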

Regards,
Sebastian Koehler

fnames.dat reduce file


My fnames.dat file is 100% full. When I try to create another extension file according to the manual, I get the error "Tablespace size limit allowed" (12:1290).

Help...

Data Protector 7.0


(DP) Support Tip: DP 9.x - IDB "bloat" error / pg_reorg tool info



Issue:

- Upgraded to Data Protector 9.06 from 7.
- Enabled alerts for the IDB.

Now there are alarms on a lot of these entries from Ob2EventLog.txt:

Warning 06/23/2016 11:07 OMNITRIG DbReorgNeeded "[138:762] Fragmentation of the table dp_catalog_object_datastream in the column objver_seq_id detected."

Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_object_version_data_protection_index detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_object_version_object_access_index detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_objver_backup_name_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_objver_pit_ux_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_objver_restore_graph_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_objver_object_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_object_uk detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_object_name_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_position_medium_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_position_objver_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_position_seqacc_med_detail_catalog_exists_index detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index position_pk detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index datastream_pk detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_objdatstr_btag_idx detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index objver_session_pk detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:763] Fragmentation of the index dp_catalog_objversession_session_idx detected."

Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:724] Bloat of the table dp_catalog_object detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:724] Bloat of the table dp_catalog_object_datastream detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:724] Bloat of the table dp_catalog_object_versession detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:724] Bloat of the table dp_catalog_object_version detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:724] Bloat of the table dp_catalog_object_version_data_protection_index detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:724] Bloat of the table dp_catalog_position_seqacc_med detected."
Warning 06/27/2016 12:30 OMNITRIG DbReorgNeeded "[138:724] Bloat of the table dp_catalog_position_seqacc_med_detail_catalog_exists_index detected."


From the Trouble.txt file:

MESSAGE:
[138:724] Bloat of the table p detected.

DESCRIPTION:
Data Protector Internal Database tables should never grow ("bloat") due to unreclaimed dead rows.
According to bloat threshold, hp recommends to reorganize the table.

ACTION:
To reorganize a table, use the pg_reorg tool.

=================================================================================================

MESSAGE:
[138:762] Fragmentation of the table p in the column p detected.

DESCRIPTION:
According to table fragmentation threshold, hp recommends to reorganize the table on the stated column.

ACTION:
Use the pg_reorg tool to reorganize the stated table.

=================================================================================================

MESSAGE:
[138:763] Fragmentation of the index p detected.

DESCRIPTION:
According to index fragmentation threshold, hp recommends to reorganize the index.

ACTION:
Run the following Data Protector command to reorganize an index:

* omnidbutil -reindex -index <IndexName>

=================================================================================================

 

RECOMMENDATION - Check the Postgres logs and, if they don't show anything to be worried about, do nothing and ignore these reorg "warnings".

Run an IDB check (omnidbcheck -extended) and an IDB backup to ensure the IDB is good.
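As a minimal sketch of that verification on the Cell Manager (the reindex line is only needed if you decide to act on a specific index warning; <IndexName> comes from the DbReorgNeeded message):

omnidbcheck -extended
omnidbutil -reindex -index <IndexName>

Run your regular IDB backup specification afterwards so you have a consistent copy of the checked database.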


Postgres typically doesn't need any special "reorg" or "bloat" removal steps because it has autovacuum doing it continuously where it matters.

The tables and indices get "bloated" over time but that doesn't mean it's worth "reorg-ing" them.

The -chkreorg code looks at the correlation between the order of column values and their physical order on disk and treats near-zero correlation as "fragmentation". It is impossible to have all columns highly correlated with the physical order unless the columns are highly correlated with each other, which would be nice but is very unlikely in a real-world database (physically reordering the table by one column disorders it for the other columns, and so on). High correlation only helps queries that return large result sets, where a sequential table scan should turn into a nearly sequential disk read to maximize I/O throughput. That is not the typical query scenario in DP; the typical scenario is an index scan returning a small result set.

No need to do anything, because whatever is actually worth doing is already being done automatically by autovacuum.


Re: [Critical] None of the Disk Agents completed successfully. Session has failed.


Please remove the packages before installing again.

Sebastian.Koehler wrote:

rpm -qa OB2*
rpm -e OB2-DB2-A-06.11-1
rpm -e OB2-INTEG-A-06.11-1
rpm -e OB2-MA-A.06.11-1
rpm -e OB2-DA-A.06.11-1
rpm -e OB2-CC-A-06.11-1
rpm -e OB2-CORE-A.06.20-1
rpm -qa OB2*

Re: [Critical] None of the Disk Agents completed successfully. Session has failed.


Sir, I uninstalled the package and installed it again. Now it installed successfully, and I also added the miprd client successfully without any error. (Screenshots attached: hp_dataprotector_error7.JPG, hp_dataprotector_error6.JPG)

Re: fnames.dat reduce file


You need to add an additional tablespace extent before you can consider purging the IDB and performing a readdb/writedb operation.

When this is done, you should consider upgrading to Data Protector A.09.07, since this will automate all kinds of purge and IDB expansion operations.

Regards,
Sebastian Koehler

Re: [Critical] None of the Disk Agents completed successfully. Session has failed.


Good, this is working now. You told me this is an SAP R/3 backup, but there is no OB2-SAP package installed on the client. Are you able to add this component to the client through the GUI and try again? Can you share a screenshot of the backup spec that is failing?

Regards,
Sebastian Koehler

Re: [Critical] None of the Disk Agents completed successfully. Session has failed.


(Screenshot attached: hp_dataprotector_error8.JPG)

After adding the client, the backup is still failing. Below is the output of the failed session:

[Normal] From: BSM@icfbackup "MIP-FULL" Time: 27-07-2016 14:45:05
OB2BAR application on "miprd" successfully started.

[Normal] From: BSM@icfbackup "MIP-FULL" Time: 27-07-2016 14:45:06
OB2BAR application on "miprd" disconnected.

[Critical] From: BSM@icfbackup "MIP-FULL" Time: 27-07-2016 14:45:06
None of the Disk Agents completed successfully.
Session has failed.

[Normal] From: BSM@icfbackup "MIP-FULL" Time: 27-07-2016 14:45:43

Backup Statistics:

Session Queuing Time (hours) 0.00
-------------------------------------------
Completed Disk Agents ........ 0
Failed Disk Agents ........... 0
Aborted Disk Agents .......... 0
-------------------------------------------
Disk Agents Total ........... 0
===========================================
Completed Media Agents ....... 0
Failed Media Agents .......... 0
Aborted Media Agents ......... 0
-------------------------------------------
Media Agents Total .......... 0
===========================================
Mbytes Total ................. 0 MB
Used Media Total ............. 0
Disk Agent Errors Total ...... 0

 


Re: fnames.dat reduce file


Hello,

As Sebastian mentioned, the first option should be to upgrade to a supported version. If this is not possible for any reason, you need to increase the values of some global options.

Please set all of the following global parameters to 8 (the "Data Protector A.06.20" / "This value was automatically copied from previous version" lines shown next to each entry are just the comments from the global file):

DbDirsDatLimit=8
DbFn1ExtLimit=8
DbFn2ExtLimit=8
DbFn3ExtLimit=8
DbFn4ExtLimit=8

The above resolved the issue.
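A small sketch of where these entries go, assuming a UNIX Cell Manager (the path below is an assumption; on a Windows Cell Manager the global file lives under the Data Protector server configuration directory):

# edit the global options file and add or adjust the entries, for example:
# /etc/opt/omni/server/options/global
DbDirsDatLimit=8

The remaining DbFn*ExtLimit entries are added the same way.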

After that, try to create a new fnames.dat extension.

Best Regards

Data Protector Mount request: medium label always the same


Hi everyone,

I have an issue with my Data Protector: every time I get a mount request, it always asks for the same tape label. How can I change this?

Thanks,

YR

Re: HP DP SSLv2 vulnerability


Hi Sebastian,

thank you very much for your answer.

I'd like to uninstall (disable) the StoreOnce Software module because we don't use it (I hope). How can I check whether we use it or not, and how can I disable/uninstall it?

Thank you

Regards

 

Re: HP DP SSLv2 vulnerability


You should run /opt/omni/lbin/StoreOnceSoftware --list_stores on the server to see if stores are defined and if they contain data. If this is the case, you can check /opt/omni/bin/omnidownload -list_libraries -detail and look for any Backup to Disk library that points to your server (see the DIRECTORY statement in the output).
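A minimal sketch of those two checks on a UNIX Cell Manager (the output will of course look different in your cell):

/opt/omni/lbin/StoreOnceSoftware --list_stores
/opt/omni/bin/omnidownload -list_libraries -detail

If --list_stores reports no stores (or only empty ones) and no Backup to Disk library DIRECTORY entry points at this host, the StoreOnce Software component is not in use and can safely be removed.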

Regards,
Sebastian Koehler

Re: HP DP SSLv2 vulnerability


You can remove StoreOnceSoftware using this procedure.

[root@linux ~]# rpm -qa | grep OB2-SODA
OB2-SODA-A.09.00-1.x86_64
[root@linux ~]# rpm -e OB2-SODA-A.09.00-1.x86_64
Shutting down StoreOnceSoftware...SUCCESS

Regards,
Sebastian Koehler
