My server crashed, and now I cannot flush my dirty files. Why not?
If your server crashes while it is in the process of flushing data, the files that were being flushed will still have the F (flushing) flag set. Under normal circumstances this flag is cleared by the flushing process after the flush has finished. After a crash the flags are not cleared automatically, so you will need to clear them manually using the xfs_check and xfs_correct programs.
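A minimal recovery sketch follows, assuming both tools take the archive name as their argument; the archive name (archive) and the order of the two commands are assumptions, so check the DAXfs documentation for the exact syntax:

daxserver# xfs_check archive
daxserver# xfs_correct archive

Run xfs_check first to inspect the state of the file cache, then xfs_correct to clear the stale F flags.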
I am getting "Too many open files" errors when flushing on DAXfs on Solaris. What can I do?
If you have a huge file cache with a lot of dirty files and want to flush it in one single flush action, you might hit this problem. It is a result of the default maximum open file handle limit in Solaris 2.4 and later. You can change this limit by adding the following lines at the bottom of /etc/system:

set rlim_fd_cur=1024
set rlim_fd_max=4096

The rlim_fd_cur variable is the soft limit and defaults to 64. It is used by all shells and all processes they start. The hard limit is rlim_fd_max and defaults to 1024. In the shell, the open file descriptor limit can be increased up to this maximum value. Adding the lines above to /etc/system raises the system-wide soft and hard limits to 1024 and 4096 respectively. Sun has info-docs about this subject: #1264, #11112 and #1001.
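After a reboot you can verify the new limits from a Korn shell; ulimit -n shows the soft limit and ulimit -Hn the hard limit (standard shell built-ins, shown here only as a quick check):

daxserver# ulimit -n
1024
daxserver# ulimit -Hn
4096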
I am getting an error with error code 66110088 but no message, and now I cannot flush. What can I do?
I am getting a "Copy already in progress" error when attempting to flush. What can I do?
Both errors are caused by a problem in the rjbcd daemon on the smartDAX unit. In rare circumstances it fails to clear the flush-in-progress flag, so the DAXfs software thinks another flush is still running when a new flush is started. There is a workaround: simply restart the rjbcd daemon on the smartDAX system. Log in to the system, then kill and restart the daemon:

smartdax# psg rjbcd
root  1438     1  0 14:51:15 ?        0:00 /opt/DAXjbc/bin/rjbcd
smartdax# kill 1438
smartdax# /etc/rc2.d/S91jbc start
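To confirm the daemon came back up, repeat the process listing (psg is a common Solaris shorthand for ps -ef piped through grep; the new PID below is illustrative only):

smartdax# psg rjbcd
root  1503     1  0 14:53:02 ?        0:00 /opt/DAXjbc/bin/rjbcd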
I am getting "UDF File System error, Error creating directory, Lookup failure for character>" when flushing to DVD-RAM/R. What is wrong?" when flushing to DVD-RAM/R. What is wrong?
Currently DAXfs only supports ASCII characters in filenames. The file you are trying to flush appears to have an extended ASCII or Unicode character in its filename. Rename the file so that only ASCII characters are used before copying it into the XFS or flushing it to DVD media.
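To locate offending files before a flush, a standard find pattern can list every name containing a character outside the printable ASCII range (a generic sketch, not a DAXfs tool; run it from the root of the file cache):

daxserver# LC_ALL=C find . -name '*[! -~]*' -print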
After exporting duplicate disks I cannot reuse the slots. What's wrong?
Make sure you use the appropriate procedure to export the duplicate disk. When exporting the duplicate disk from the slot view in DAXplorer, the system will ask you if you want to remove it as a duplicate disk. If you use the command line tool xfs_remove, make sure to add the -d switch:

daxserver# xfs_remove -d archive 1