I'm new to the HP DP backup tool. Could you please help me install a CLI console on my laptop?
I have installed the GUI and am connecting to Cell Managers fine, but I want to run some commands from the CLI.
Thanks in Advance
Hi,
The CLI and the GUI are both included in the Cell Console (CC) component. Once that component is pushed to the client, both the GUI and CLI are available on that client.
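A minimal sketch of installing the Cell Console locally on a Unix client, assuming the installation depot is available on the client and using an example Cell Manager name (on Windows you would instead tick the Cell Console component in the setup wizard, or push it from the Installation Server):

```
# Run from the DP installation depot on the client; hostname is an
# example. "cc" selects the Cell Console (GUI + CLI) component.
./omnisetup.sh -server cellmgr.example.com -install cc

# The CLI commands then live under /opt/omni/bin on Unix, or
# "C:\Program Files\OmniBack\bin" on Windows.
```

Check the exact component codes and options with the Installation guide for your DP version before running.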
Koen
Hi Randy.
It sounds like using 'Changed Block Tracking' will fix your problem. In newer DP releases this is the default, but I don't know which release you are using. I would recommend reading the 'Changed Block Tracking' part of the VMware section in the Integration guide to understand how it works and how it can be used.
Regards,
Shishir
With DP 9.04, disabling CBT is no longer an option; you have to use it.
When I ran my first VMware backup, I got the CBT errors because the VMware guys had never used it. I worked with them so that now, when a new VMware client needs to be backed up, they run some script against the client to enable CBT before I add the client to backups.
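For reference, one way a VMware team can enable CBT per VM from a command line is through the VM's extra-config settings. This is only a hedged sketch using the open-source govc tool and an example VM name, not the actual script mentioned above:

```
# Assumes govc is installed and GOVC_URL / credentials are set for
# your vCenter. ctkEnabled turns on Changed Block Tracking for the VM;
# per-disk scsiX:Y.ctkEnabled entries may also be needed, and the
# setting takes effect after a power cycle or snapshot.
govc vm.change -vm example-vm -e "ctkEnabled=true"
```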
But you brought up a good point. I was comparing initial backups on both applications. Let me run another full backup to see if DP numbers come down.
**************************************
I was thinking maybe the 2nd backup would be smaller by just picking up changes, but it was the same size.
If an IDB backup (here: 5 streams to 1 gateway) to a StoreOnce device shows the following error
-->
[Major] From: BMA@bma-client.domain "D2D02_IDB_dpcs2_gw1 [GW 1060:0:10030647198765541745]" Time: 18.09.2015 12:47:23
[90:51] \\d2d02.xxx.uni.de\Store_IDB_dpcs2\9f65cb95_55fbeb63_0d6c_0001
Cannot write to device (JSONizer error: Invalid inputs)
[Major] From: BSM@cs2.domain "system-omni2" Time: 18.09.2015 12:47:59
[61:3003] Lost connection to BMA named "D2D02_IDB_dpcs2_gw1 [GW 1060:0:10030647198765541745]"
on host bma-client.domain.
Ipc subsystem reports: "IPC Read Error
System error: [10054] Verbindung wurde von Peer zurückgesetzt (connection was reset by peer)
"
[Critical] From: BSM@cs2.domain "system-omni2" Time: 18.09.2015 12:47:59
[61:12019] Mismatch in backup group device and application database concurrency configuration.
Application database concurrency is 5.
situation:
DP 8.13 build 207 (MA DPWIN_00817) 1 gateway, #streams = 5 --> error above
DP 8.13 build 207 (MA DPWIN_00817) 1 gateway, #streams = 1 --> works fine
The following should be considered:
Concurrent streams to a D2D device are not supported as per Deduplication whitepaper (Deduplication.pdf).
That is because concurrency lowers the deduplication ratios on D2D devices (if you have interleaved data,
it can make different combinations of the same files, meaning that same data can be written multiple times).
And there's also nothing to gain; concurrency is mostly there for tape drives, so that they don't have to throttle:
the tape can run without stopping to wait for a Disk Agent to send data (braking a tape and then starting it again takes some time).
Since D2D devices are disk based, they do not benefit from that mechanism.
JSONizer was never designed for concurrency, since it was written with that limitation in mind.
That's also why it reports errors: it can't read interleaved data.
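The interleaving effect can be shown with a tiny toy sketch (plain shell, not DP code): the same two streams laid down on the medium in a different order produce a different byte sequence, which is exactly what defeats chunk-level deduplication.

```shell
# Two small "streams" with fixed content.
printf 'AAAAAAAA' > s1
printf 'BBBBBBBB' > s2

# Same payload, multiplexed in two different orders:
cat s1 s2 > run1   # stream order s1, s2
cat s2 s1 > run2   # stream order s2, s1

# The two media images contain identical data but hash differently,
# so a dedup store sees "new" chunks instead of duplicates.
md5sum run1 run2
```

With real backups the stream boundaries also shift from run to run, so the effect is worse than this fixed example suggests.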
White Paper HP Data Protector 9.00 Deduplication
-->
To optimize deduplication performance, Disk Agent concurrency is not supported
(this means, one Disk Agent talks to one Media Agent – there is no multiplexing of streams).
Solution
So a better configuration would be: each DA writes to a separate MA.
More pieces of the pie. I had an HP tech ask these questions and wanted to provide the answers here. We are using thick provisioning as opposed to thin provisioning. And we are using eager zeros instead of lazy zeros.
I am not getting both [GUI and CLI] in the Components selection list; please assist.
Please find the attachment.
Hi,
the issue could still use better error reporting. Apparently, sending concurrent streams to an SO B2D device fails in intricate ways while the data is being ingested, and the resulting errors produce all kinds of other fallout, like hanging/freezing sessions. Instead of triggering low-level JSONizer data faults, what about preventing concurrent data streams from ever reaching it? Or, in case they do anyway, producing a readable message that pinpoints the problem? I had the luck to find an internal memo in the support portal explaining that JSONizer errors may just be caused by inadvertently concurrent streams, but had I not already developed this suspicion, I'd never have found that gem. This is clearly ineffective.
Please also note that there is not always a way to avoid running into this bug as of 9.04. I tried, and I could avoid it for file system objects. But try the following:
So there are cases where the Admin has no control over the multiplexing and will run into the issue even though the configuration is correct. That's clearly a bug, so let's hope it gets fixed one day - but how many more of this kind may lurk somewhere?
HTH,
Andre.
Hi,
while monitoring my SOS housekeeping activity on 9.04 (running on W2k8R2), I noticed that, starting after a while of normal operations, HK suddenly doesn't free disk space anymore. Instead, during HK, the disk usage grows by significant amounts (hundreds of GB). At the same time, StoreOnceSoftware --list_stores tells me it freed some 13GB (Store Data Size is dropping from 4121GB to 4108GB). I first tried to explain the issue away by metadata increase, but observations made after that tell a different story:
Anyone else seeing this? Workarounds? Is there a known fix? Maybe even in 9.05?
TIA,
Andre.
Anyone out there encounter the same problem with VM backup using DP 9.04? We were initially on DP 9.03 and this had been working fine until we updated to DP 9.04.
/var
Directory is a mount point to a different filesystem.
Backed up as empty directory without extended attributes and ACLs.
All inputs are Welcome :)
Hi,
that's a Warning, not an Error. It's there to inform you what actually happened. If you don't want to see these warnings any longer, consult OB2NOREMOTEWARNINGS in your respective DA's .omnirc. But suppressing them doesn't change the fact: when you back up a Unix file system object housing any mount points, these will be entered into the target object as empty directories and may be missing their original attributes (because these are invisible, the root directory attributes of the file system mounted there taking over). The warning is there to make you aware that, should you somehow manage not to back up the mounted file system individually to another target object, your backup will miss it. So check whether you have objects for these clients specifying a source of /var in addition to those sourced from /; if so, all is well. Otherwise, fix your Trees and/or Excludes.
BTW, at least on Linux, you can easily get the true permissions and other attributes of the mount point directory by simply singling out the containing file system in a bind mount. DP's DA makes no use of this, though, and it wouldn't be portable anyway.
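For completeness, a minimal Linux sketch of that bind-mount trick (needs root; /mnt/rootonly is an example path):

```
mkdir -p /mnt/rootonly
mount --bind / /mnt/rootonly     # re-expose the root fs itself, without submounts
ls -ld /mnt/rootonly/var         # true owner/mode of the /var mount-point directory
umount /mnt/rootonly
```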
HTH,
Andre.
This is why I'm not using SOS anymore; it never works longer than a few months.
Hi.
DP 9.04 makes CBT mandatory for VMware backups. Unfortunately for you, CBT doesn't work in the presence of user snapshots. You'll need to eliminate user snapshots for your backups to work.
Regards,
Shishir
I have a Linux client zoned to tape drives over SAN for SAN backup.
I recently upgraded the tape drives to the latest firmware, but when I run cat /proc/scsi/scsi on the client, it still shows the old drive firmware. Can anybody tell me how to rescan so that the latest firmware shows up?
I don't want to reboot the client.
Hello everyone,
our HP DP 9.03 requests this tape:
Mount request for medium:
MediumId : fe81fde5:53847f36:0550:0002
Label : 503331L5
Location :
Device : xxx
Host : xxx
Slot : 27
In slot 27 there is a tape with the label 503331L5, but with the medium ID d41ea8c0:562a10bc:400c:0007.
Now I want to set the protection of medium ID fe81fde5:53847f36:0550:0002 to permanent.
Or can you tell me how to tell HP DP to use another medium when one medium cannot be found? There are 40 tapes in the free pool, so there are enough to use.
Thanks
and sorry if this issue was already posted in another thread, but every search I tried returned about 50 pages of results...
If you are configuring the environment described in the title:
"Non-staged VMware GRE smartcache devices based on Storeonce NFS/SMB network storage."
and in the middle of your backups to these devices you experience disconnections or poor performance, please check this Customer Notice:
http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c03924510&sp4ts.oid=5196525
There were some problems both in CIFS and NFS causing disconnections and poor performance that should have been resolved with the latest Storeonce firmware:
CIFS shares periodically losing connection
CIFS share failed due to Cannot write to device [64] The specified network name is no longer available
Unable to manage CIFS share permissions on appliances that are members of an Active Directory domain
Poor performance of backups to CIFS shares with Veeam software
NFS shares fail to start
NFS performance improvements
NFS Share created as write-protected cannot be changed to non-write protected
If you have already installed the latest firmware and the issue persists, please contact StoreOnce support to ensure they provide you with the right fixes for these known issues.
Regards
Juanjo
This is probably a question to be addressed to a Linux forum.
Anyway, have you tried running rescan-scsi-bus.sh script ?
You can install it with 'yum install sg3_utils'.
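A short sketch (RHEL-style package name; needs root). Note that a plain rescan may not refresh the cached INQUIRY data of devices that are already present, so it's worth checking the script's options with --help:

```
yum install -y sg3_utils       # provides rescan-scsi-bus.sh
rescan-scsi-bus.sh             # rescan all SCSI hosts for changes
cat /proc/scsi/scsi            # re-check the reported firmware revision
```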
Regards
Juanjo
It seems that some barcode labels have been reused on new media without deleting the previous media from the database.
This could lead to data corruption in the DP Internal Database, and it's not recommended.
My suggestion would be to export one of the two tapes from DP, then place a different label on it and import it back into DP.
Hopefully this will resolve the problem.
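If you prefer the CLI for the export step, a hedged sketch (the medium ID is taken from the thread; double-check the exact options with `omnimm -help` on your Cell Manager, since they vary between DP versions):

```
# Inspect both records first, then export the stale one so the
# barcode can be re-labelled and imported cleanly.
omnimm -media_info fe81fde5:53847f36:0550:0002 -detail
omnimm -export fe81fde5:53847f36:0550:0002
```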
Regards
Juanjo