Channel: All Data Protector Practitioners Forum posts
Viewing all 10494 articles

StoreOnce error: StoreOnce device offline, network error ...


We have a cell manager running version 9 and it backs up to a disk subsystem on a different server.

A few days back the jobs failed. The error message is

 

Got error: "StoreOnce error: StoreOnce device offline, network error occurred or secure communication failed while contacting the StoreOnce device" when contacting  B2D device!

 

 

It also mentions that the gateway is disabled.

However, the gateway is not disabled. When you "check" the gateway by going to the properties of the device, it throws you the same error instead of "OK".

When I tried to create a new device, none of the stores populated in the select stores section of device creation. I did verify that the stores were started and ON in the disk subsystem.

Any help would be greatly appreciated!


Re: How to have DP GUI show the actual name of the virtual server?


Hi, we're on 9.04. I know that I can see the name and path if I drill down in Objects under the vCenter node and the unique identifier, but I would like to get rid of the unique identifier and have the server name at this level. We have over 400 servers being backed up under one vCenter, and it's terrible when you want to see the details of one server.

 

Also, I want the server name to appear in the GUI during the backup instead of the unique identifier, which frankly is no good at all.

Have you seen any enhancement request about this?

Data Protector 9.0 and StoreOnce


I have a couple of questions about DP 9.0 and StoreOnce software. I am backing up the data to one of my Dell SANs. I have been using it for a few months and feel that I am not using it properly. I have DP 9.0 installed on a Windows 2012 R2 server. I am backing up data from 9 remote sites and 25+ in-house servers.

 

Question #1. After creating the backup job for the remote sites I did a full backup of each site. Since then I have been doing incremental backups. Am I doing this right? I had read that I should be doing full backups all the time but cannot find that article again. Should the backups be Full or Incr? I run these backups at 10:00am, 12:00pm, 2:00pm, 4:00pm and 6:00pm.

 

Question #2. After creating the StoreOnce store I was wondering if I could have two different volumes at two different locations, so I can back up my remote sites to the in-house SAN and the in-house servers to my DR site. Is this possible, and how do I make it happen?

Question #3. My manager and director have said that they want to get away from tape backups. It was my understanding that DP 9.0 data deduplication would allow me to keep my backed-up data forever. Then I read further in a DP document that if I want to keep the backed-up data longer than one year, I need to back it up to tape. Is this correct?

 

Thanks for any help you can give.

Re: DP 9.0x supports for EMC Data Domain DDOS 5.6.0.3


In the device_matrix_support file for DP 9.04 the latest version mentioned is 5.5...

Re: throughput


halcanites wrote:

I am running DP 8.10 on win server 2012 R2.  Is there a way I can see the throughput of the backup job while it is running?  Also, is there a way I can see what file is being backed up while the job is running?


Regarding throughput: it becomes second nature to "feel" the throughput from watching the Monitor GUI if you've been "doing DP" for some years. It really escapes me why a display of the instantaneous transfer rate was never implemented, given that everybody and their dog repeatedly asks for it. When the feel is somehow wrong, I help myself by watching things like the Task Manager network tab or atop(1) on Linux (to see the incoming data rate on the MA hosts), and Resource Monitor, Perfmon, Process Explorer and again atop(1) to watch source disk saturation, single-core CPU saturation and such.

To see the file being backed up, if the DA in question runs on Windows (I presumed so from your sparse input), you could start Resource Monitor (from the Task Manager Performance tab), filter on the vbda process in question and then closely watch the "Disk" pane on the Overview tab. On the CPU tab, in the "Associated Handles" pane, you will get even more info on the filtered process, specifically open files it isn't actively doing I/O with. While you're there, you may find a reason for the bad throughput - have a close look at Disk "Highest Active Time" (in the Overview and Disk panes).

 

Whether the throughput you see is about right, nobody can tell without more data. 12 TB could be a dozen ideally unfragmented large files, or it could be a billion really small ones in a file system with metadata looking like it took several loads of buckshot. Assuming 100 IOPS per spindle (a typical estimate for mediocre disks; it's mostly 80..200 from worst to best), a worst-case 4 KiB random access pattern needs 12000000000000/(4096*100) seconds to read 12 TB - that is about 340 days! A lots-of-small-files traversal is close to pure random access, so any throughput beyond 6 MB/s is actually not that bad and harder to achieve than most people think. You need multiple spindles for that. Your volume K took 20 days to pull 6 TB? That's approx. 3.5 MB/s. That requires about 800 IOPS at worst-case randomness, which would mean some 6 to 8 spindles even on spinning rust. Or somewhat fewer, because the worst case isn't usually what you hit. So it may be completely normal. Watch your source disk saturation.
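The worst-case arithmetic above can be reproduced with a short calculation. The figures are the same rough assumptions as in the text (~100 IOPS per spindle, a 4 KiB random-access pattern), not measurements:

```python
# Worst-case read-time estimate for a lots-of-small-files backup source.
# Assumptions (from the discussion above): ~100 IOPS per spindle,
# a purely random 4 KiB access pattern.

KIB = 1024

def worst_case_days(total_bytes, iops=100, io_size=4 * KIB, spindles=1):
    """Days needed to read total_bytes at worst-case random access."""
    seconds = total_bytes / (io_size * iops * spindles)
    return seconds / 86400

def mb_per_s(total_bytes, days):
    """Average throughput in MB/s for total_bytes transferred over N days."""
    return total_bytes / (days * 86400) / 1e6

# 12 TB on one spindle at worst-case randomness:
print(round(worst_case_days(12e12)))   # 339 (days)

# 6 TB pulled in 20 days:
print(round(mb_per_s(6e12, 20), 1))    # 3.5 (MB/s)
```

Adding spindles divides the time linearly in this model, which is why the post estimates 6 to 8 spindles for ~800 IOPS.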

HTH,

Andre.

Re: Data Protector 9.0 and StoreOnce


Hi Jeffrey,

 

#1 Full backups only is fine from the PoV of StoreOnce (at least in theory), as dedup will squeeze the redundancies out no matter whether the same data is repeatedly transferred to the SO store (Full) or omitted (Incr). But of course it isn't fine for the sources (load on the storage backends, CPUs etc., especially at the backup rate you use), and in practice SO too has to chew on the data it has to dedup away or rehydrate. Incremental forever (aka virtual full using consolidation) is dead IMO (it can still be implemented on SO AFAIK, but it really doesn't perform well and I no longer see any reason for the hassle). So what I do is just use classic staggered cadences of Full, Incr and differentials (Incr N in DP parlance), exactly as I used them before on tape, not least because I'm going to copy to tape later anyway. For me that works best. I've meanwhile scrapped every DFMF FileLibrary I ever implemented for consolidation and Incr4ever.
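One reason the staggered cadence keeps restores manageable is that it bounds the restore chain. Here is a toy model of the idea; it is a simplification for illustration, not DP's actual restore logic:

```python
# Toy model of restore chains under different backup cadences.
# Session levels (simplified DP parlance):
#   "full"  - everything
#   "incr1" - changes since the last full (a differential)
#   "incr"  - changes since the last backup of any kind

def restore_chain(sessions):
    """Indices of the sessions needed to restore to the newest point."""
    chain = []
    i = len(sessions) - 1
    while i >= 0:
        level = sessions[i]
        chain.append(i)
        if level == "full":
            break
        if level == "incr1":
            # a differential lets us jump straight back to the last full
            i = max(j for j in range(i) if sessions[j] == "full")
            continue
        i -= 1
    return list(reversed(chain))

# Incremental forever: every session since the full is in the chain.
forever = ["full"] + ["incr"] * 6
print(len(restore_chain(forever)))     # 7

# Staggered: a mid-week differential caps the chain length.
staggered = ["full", "incr", "incr", "incr1", "incr", "incr", "incr"]
print(len(restore_chain(staggered)))   # 5 (full, incr1, 3 trailing incrs)
```

The longer the gap between fulls, the more this difference matters: with incremental forever the chain keeps growing; with a differential in the middle it resets.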

 

#2 With SO Software, you can only have one backend filesystem per machine where SO runs. So in order to achieve what you want, you could put another machine in your DR site, install SO Software and an MA on it, and then direct backups there. Or you could actually buy a StoreOnce Backup appliance and connect it via Catalyst. For that to truly make sense, though, you ideally need two appliances and not SO Software - AFAIK you can only do thin replication (copy specs that are executed without rehydration, cf. federated dedup) between appliances, not between SO Software stores or between Software and appliance stores. This way, you could back up to the local appliance (high bandwidth) and then thin-copy to the remote one, using bandwidth effectively. And of course you could do this bidirectionally between DCs. With SO Software, there's no way around rehydration and high-bandwidth copies (I'd really like to hear I'm wrong here, because I do have both kinds of SO stores and would like to use replication).

 

#3 Tape isn't going to die soon. Of course, you don't back up to tape directly any longer - you just copy to it. The point of tape is duplication and resiliency. With one SO store, you are just ONE malfunction away from losing EVERY backup you ever made. And it happens. I've seen stores go wonky, e.g. after a SmartArray controller crash. And just last week enough disks failed in a RAID5, in short enough sequence, for a store to just vanish. That doesn't happen with tape, at least when done right (off-DC and off-site vaulting, combined). The risk is reduced with two geo-redundant SO stores as discussed in #2, but I still wouldn't bet the farm on just that.

That being said, keep your data on SO as long as you want. As long as your store isn't filling up or getting extremely slow due to the vast amounts of data in it, there are no limits. It's just that it may be gone in a split second, completely. I actually plan to return to GFS-style schedules on my SOs for the same reason: they still have plenty of free space, so I can run some fulls in my schedule with extended data protections. Say, default protection 8 weeks, every 4 weeks a full with 24 weeks protection, every 12 weeks one with 2 years protection, and a yearly full with permanent protection or 10 years or so. Of course, that's no longer backup but abusing the free capacity as a poor man's archive - fully knowing it's unreliable (so it doesn't replace an archive for the stuff where you need one for legal reasons). But why should the SO store have 30 TB of unused physical disk space ;)
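The GFS-style scheme just sketched can be modeled in a few lines. The week numbering and the "permanent" sentinel are illustrative choices, not DP settings:

```python
# Model of the GFS-style protection scheme described above: default data
# protection 8 weeks, every 4th weekly full kept 24 weeks, every 12th
# kept ~2 years, one full per year kept (near-)permanently.

PERMANENT = float("inf")

def protection_weeks(week):
    """Protection period (in weeks) for the full backup of a given week."""
    if week % 52 == 0:
        return PERMANENT   # yearly archive full
    if week % 12 == 0:
        return 104         # ~2 years
    if week % 4 == 0:
        return 24
    return 8               # default protection

# Which of the weekly fulls from weeks 1..30 are still protected at week 30?
now = 30
alive = [w for w in range(1, now + 1) if w + protection_weeks(w) > now]
print(alive)   # -> [8, 12, 16, 20, 23, 24, 25, 26, 27, 28, 29, 30]
```

The recent weeks survive under default protection, while the older "grandfather" fulls (weeks 8, 12, 16, 20) persist on their longer protections, which is exactly the tiered-retention effect described.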

 

BTW, I'm currently running a scheme where my backups go to two SO stores mirrored (one SO software, one SO Catalyst appliance at another location), then two post-backup copies stage them to tape. One reads from the SOS store and writes to local LTO6 drives, the other reads from the remote SOC appliance and writes to a bunch of LTO3 tapes at yet another location (I had to find some use for my older MSLs and LTO3 media pool). The mirroring on backup and the parallel staging to a total of 5 tape drives have shrunk my backup windows more than I initially expected. Of course it's all new, we'll see how it copes in 7 years ;)

 

HTH,

Andre.

Confusion between Recycle / Data Protection period


Hi ,

We have a Cell Manager on Windows 2008 with 10 AIX clients and an HP StoreOnce in our setup.

We want to set up a policy such that after 7 days the data will be wiped and we can take a fresh backup on the same disk without manually recycling/formatting it. (It must be automated.)

 

We are confused as to how we achieve this.

Do we need to look at the data protection period? If it's set to 7 days, will the data automatically be overwritten after 7 days?

Re: How to have DP GUI show the actual name of the virtual server?


Hello

 

During a backup session we show the VM name, its path and its UUID. If you want to see any other data, please open a support case requesting what you want, and a new ER will be opened.

 

Best Regards


Re: Confusion between Recycle / Data Protection period


Hello

 

You have two different kinds of protection. The first protects the data about your backups in the IDB:

 

Option: Catalog protection

Catalog protection determines how long the information about the backed up data is kept in the IDB. If there is no catalog protection, you can still restore your data, but you cannot browse for it in the Data Protector GUI.

None: provides no protection.

Until: means that the information in the IDB cannot be overwritten until the specified date. Protection for the information stops at noon on the chosen day.

Days: means that the information in the IDB cannot be overwritten for the specified number of days.

Weeks: means that the information in the IDB cannot be overwritten for the specified number of weeks.

Same as data protection: means that the information about the backed up data in the IDB is protected for as long as the data itself is protected.

 

 

Option: Protection (data protection on the media)

This option enables you to set periods of protection for the data you back up to prevent the data from being overwritten. The default value is Permanent. Others are:

None: provides no protection; the media will be removed/deleted before the next write operation/backup to the file library is started.

Until, Days and Weeks: work analogously to the catalog protection options above, but apply to the backed up data itself.
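The distinction between the two protections can be shown with a minimal model: data protection decides whether a backup can still be restored, catalog protection only decides whether it is browsable in the GUI. Field names here are illustrative, not DP internals:

```python
# Minimal model of catalog protection vs. data protection.
# With expired catalog protection you can still restore the data,
# you just can't browse it in the GUI any more.
from dataclasses import dataclass

@dataclass
class BackupObject:
    backed_up_day: int
    data_protection_days: int     # how long media may not be overwritten
    catalog_protection_days: int  # how long the IDB keeps browse info

    def restorable(self, today):
        return today < self.backed_up_day + self.data_protection_days

    def browsable(self, today):
        # "Same as data protection" would tie this to restorable()
        return self.restorable(today) and \
            today < self.backed_up_day + self.catalog_protection_days

obj = BackupObject(backed_up_day=0, data_protection_days=28,
                   catalog_protection_days=7)
print(obj.browsable(10), obj.restorable(10))   # False True
```

At day 10 the object is past its catalog protection (no browsing) but still inside its data protection (restore still possible), which is exactly the behaviour described above.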

 

Best Regards

Re: Confusion between Recycle / Data Protection period


Hi,

 

My question is related to the protection of data on tape.

In the options we have retention periods of days, weeks, until, etc.

If we set it to 7 days and take a backup on a medium of, say, 400 GB, where the backup data is only 350 GB:

After 7 days, will it be automatically recycled and overwritten with new data?

Re: Confusion between Recycle / Data Protection period


Hi

 

Yes - but the tape will not be overwritten until the last MB on it has expired. That means that if you have 399 GB of expired data and 1 GB still protected, the tape will not be overwritten.
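The overwrite rule can be sketched as follows; the tuple layout is an illustrative assumption, not DP's media management:

```python
# A medium is only reusable once *every* object on it has passed its
# protection - so a single protected GB locks the whole tape.

def medium_reusable(objects, today):
    """objects: list of (size_gb, protected_until_day) tuples."""
    return all(today >= until for _size, until in objects)

tape = [(399, 5), (1, 30)]        # 399 GB expired at day 5, 1 GB held to day 30
print(medium_reusable(tape, 7))   # False: one protected GB blocks the tape
print(medium_reusable(tape, 30))  # True: everything has expired
```

This is why per-object protection periods, not free space, determine when a medium can be recycled.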

 

Best Regards

Re: How to have DP GUI show the actual name of the virtual server?


I think we're talking about different things:

I want to have the VM name shown in the upper window when monitoring a backup job (see attached file dp-temp). This is where you can view the status of the ongoing backup.

I know that I can see the name and path when DP adds the VM to the backup (see attached file dp-temp2).

 

Is this the way you're referring to when you say that you can see the name?

 

Also, I can't understand why HP shows the UUID as the first-level identifier under vCenter in Objects. How do you check status and troubleshoot when you have to drill down into every single UUID to find the right server (see attached dp-test3)? The first level should represent the VM name; Data Protector is the only backup application I work with that doesn't show me the virtual server name as the identifier in the Database GUI.

 

And again, when the backup is done, in the backup log DP again shows the VM as the UUID and not the name. How am I supposed to know which server is done (see attached file dp-test4)?

Re: Confusion between Recycle / Data Protection period


Hi,

 

So it means the space left on the tape/disk is not relevant.

After the 7-day retention period, the tape will automatically be recycled and can be reused.

Re: How to have DP GUI show the actual name of the virtual server?


Hello

 

Yes, you are talking about showing the names in the session messages. Please open a new support case to create a new ER.

 

Best Regards

New omnidbutil command options not visible after upgrading DP


Hello,

 

I upgraded our DP cell server to DP 7.03 IB 107 as requested in this article:

https://h20564.www2.hpe.com/hpsc/doc/public/display?docId=emr_na-c04636829&lang=en-uk&cc=uk

 

I want to use the new omnidbutil -set_idb_password command as described here:

http://h20565.www2.hpe.com/hpsc/doc/public/display?sp4ts.oid=326828&docId=emr_na-c04702422&lang=en-uk&cc=uk&docLocale=en_US

 

However, the new command option is not visible when I run the omnidbutil help, and running the command just takes me back to the help page. It looks like the new option was not enabled.

 

Cell server - Win 2008R2

Patches are visible:

Patch level Patch description
===========================================
DPWIN_00790 Core Component
DPWIN_00791 Cell Manager Component
DPWIN_00792 Disk Agent
DPWIN_00793 Media Agent
DPWIN_00794 User Interface
DPWIN_00645(BDL703) English Documentation (Guides, Help)
Number of patches found: 6.

 

Any suggestion is welcome. 

Not all clients are patched yet.

 

Thanks!


Re: DP7.00 HP Data Protector software object consolidation - Best Practices


From what I understand in this thread

Replication via DP is only possible between hardware StoreOnce appliances (or StoreOnce VSA).

Software StoreOnce Stores cannot replicate - any object copy from a Software StoreOnce store will result in re-hydration of the files as they are copied to another StoreOnce (either S/W or H/W)?

This was the case in DP 7.01. Is this still the same in DP 9.04?

Or is there a way to get deduplicated data on a Software StoreOnce Store to copy to a Hardware based StoreOnce (in a central DataCentre so via WAN) without rehydration by using Data Protector?

 

Thanks,

 

Chris

 

Re: DP7.00 HP Data Protector software object consolidation - Best Practices


Hello,

 

Indeed, replication is only possible with StoreOnce Backup Systems (either hardware with a Catalyst license, or the VSA, which includes the Catalyst license).

As you have understood, any object copy performed from or to a StoreOnce Software store results in the data being rehydrated.

This is still the case in 9.04.

You can't transfer from SOS (StoreOnce Software) to a remote store without rehydrating, but you can transmit the data deduplicated (although this is not replication, as the data is rehydrated and then deduplicated again).

To achieve that, you must create a server-side gateway, located at your remote site, for your central site's Catalyst store.

It works great for small amounts of data but can be time-consuming for large amounts.

 

 

Re: New omnidbutil command options not visible after upgrading DP


What's the exact syntax you are using, and which omnidbutil version (omnidbutil -version)?

 

 

Re: throughput


Andre,

 

The problem with this is that I have multiple backup jobs, ten right now, currently running. This long-running job started way back on Oct 2. I also have backup jobs running from Oct 24 (5) and Oct 26 (4).

How to configure storage and shelves in an HP D2D 4112?


Hi folks,

 

We have an HP D2D 4112 device and reset it to factory settings.

 

After the reset to factory settings, we lost the storage configuration and can't find where to configure it.

 

What are the steps to set up the storage?

 

 

Thank you.

Best Regards,

Marcos França. 
