
Can I build a disc image while keeping timestamp precision in milliseconds?


yllLiBe


Hello.

I'm an ImgBurn user from Japan.

Thanks for the great software.

 

Can I build a disc image while keeping timestamp precision in milliseconds?

Or is there a reason why ImgBurn truncates timestamps to whole seconds?

 

I confirmed the following:

  1. NTFS stores timestamps with 100-nanosecond precision. ( http://msdn.microsoft.com/en-us/library/ms724290%28VS.85%29.aspx )
  2. I can get the milliseconds of a timestamp with the FindFirstFile() and FileTimeToSystemTime() functions.
  3. In a Cygwin Bash shell, `ls --full-time' displays nanoseconds in the timestamp (of course it doesn't have full resolution).
  4. The Timestamp structure in UDF 1.02 has a Microseconds field (see the sketch after this list). ( http://www.osta.org/specs/pdf/udf102.pdf )
  5. Viewing a disc image file built by ImgBurn in a binary editor, the Centiseconds, HundredsofMicroseconds and Microseconds fields appear to be zero.
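
For reference, here is how I read that Timestamp structure from the UDF 1.02 / ECMA-167 documents (the field names follow the spec, but the struct name and integer typedefs are my own, so please check them against the spec):

/* Sketch of the UDF 1.02 Timestamp structure (ECMA-167 1/7.3), as I read
   the spec -- the struct name and integer typedefs here are mine. */
typedef unsigned char  Uint8;
typedef unsigned short Uint16;
typedef short          Int16;

typedef struct
{
   Uint16 TypeAndTimezone;
   Int16  Year;
   Uint8  Month;
   Uint8  Day;
   Uint8  Hour;
   Uint8  Minute;
   Uint8  Second;
   Uint8  Centiseconds;            /* 0..99 */
   Uint8  HundredsofMicroseconds;  /* 0..99 */
   Uint8  Microseconds;            /* 0..99 */
} UdfTimestamp;                    /* 12 bytes on disc */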

 

I tried to burn a disc containing image files and thumbnail files created by a certain image viewer.

But the image files on the disc have slightly different timestamps from the original files, so the image viewer recreates the thumbnails.

This is why I want timestamps stored with millisecond precision.

 

Sorry for my poor English.

Sincerely yours.


Can I build a disc image while keeping timestamp precision in milliseconds?

 

No.

 

Or is there a reason why ImgBurn truncates timestamps to whole seconds?

 

I was probably too lazy to work out the conversion and didn't think milliseconds were really that big of a deal.

 

From what I can figure out now, the following *should* work... (please correct me if I'm wrong!)

 

TS->Centiseconds = SystemTime.wMilliseconds / 10;
TS->HundredsofMicroseconds = (SystemTime.wMilliseconds - (TS->Centiseconds * 10)) * 10;
TS->Microseconds = 0;

 

So assuming wMilliseconds is something like 551, that makes...

 

Centiseconds = 551 / 10

= 55

 

HundredsofMicroseconds = (551 - (55 * 10)) * 10

= (551 - 550) * 10

= 1 * 10

= 10

 

Microseconds will always be 0 because you can't invent accuracy that doesn't exist in the first place.
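
Putting that together as a self-contained check (just a throwaway test, not the actual ImgBurn code):

#include <windows.h>
#include <stdio.h>

int main(void)
{
   /* Same conversion as above, checked with wMilliseconds = 551. */
   WORD wMilliseconds = 551;
   int Centiseconds = wMilliseconds / 10;                                   /* 55 */
   int HundredsofMicroseconds = (wMilliseconds - (Centiseconds * 10)) * 10; /* 10 */
   printf("%d %d %d\n", Centiseconds, HundredsofMicroseconds, 0);
   return 0;
}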


Thanks for your reply.

 

I guessed the timestamp truncation might be due to some requirement of the UDF specification, but now I understand that it is not.

 

From what I can figure out now, the following *should* work... (please correct me if I'm wrong!)

(snip)

 

It seems correct, but let me show a way to get a slightly more precise timestamp.

 

#include <windows.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
   SYSTEMTIME SystemTime;
   FILETIME CreationTime, LastAccessTime, LastWriteTime;
   int Centiseconds;
   ULARGE_INTEGER uli;

   /* Create a scratch file and read its timestamps back. */
   HANDLE h = CreateFileW(L"foo", 0, 0, NULL, CREATE_ALWAYS, 0, NULL);
   if (h == INVALID_HANDLE_VALUE)
      return 1;
   GetFileTime(h, &CreationTime, &LastAccessTime, &LastWriteTime);

   /* Case A: via SYSTEMTIME, which only carries milliseconds. */
   FileTimeToSystemTime(&LastWriteTime, &SystemTime);
   printf("A: Centiseconds: %d\n", (Centiseconds = SystemTime.wMilliseconds / 10));
   printf("A: HundredsofMicroseconds: %d\n", (SystemTime.wMilliseconds - (Centiseconds * 10)) * 10);
   printf("A: Microseconds: %d\n", 0);

   /* Case B: directly from the FILETIME, which counts 100-nanosecond units. */
   uli.LowPart = LastWriteTime.dwLowDateTime;
   uli.HighPart = LastWriteTime.dwHighDateTime;
   printf("B: Centiseconds: %d\n", (int)((uli.QuadPart / 100000) % 100));
   printf("B: HundredsofMicroseconds: %d\n", (int)((uli.QuadPart / 1000) % 100));
   printf("B: Microseconds: %d\n", (int)((uli.QuadPart / 10) % 100));

   CloseHandle(h);
   return 0;
}

 

The result of the above code is shown below.

 

<command prompt>
C:\Data\Make\Programs\test_nanosecond\Debug>test_nanosecond.exe
A: Centiseconds: 71
A: HundredsofMicroseconds: 80
A: Microseconds: 0
B: Centiseconds: 71
B: HundredsofMicroseconds: 87
B: Microseconds: 50

<cygwin bash shell>
$ ls --full-time foo
-rwx------+ 1 yllLiBe なし(*) 0 2011-01-05 23:17:55.718750000 +0900 foo

(*): なし means `no group' in English.

 

Case A is the method you wrote above.

It uses FileTimeToSystemTime() to get a millisecond value.

 

Case B uses the FILETIME structure directly, which counts in 100-nanosecond intervals. [MSDN]

This gives a slightly more precise timestamp.

FYI, the timestamp tick actually appears to be 0.000125 sec (= 125 microseconds) in my environment.

With `ls --full-time' I can see times such as HH:MM:SS.mmm125000 and HH:MM:SS.mmm750000, but never HH:MM:SS.mmm100000 or HH:MM:SS.mmm200000.
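
To spell case B out with the timestamp shown above (.718750000 seconds past the whole second, i.e. 7,187,500 units of 100 nanoseconds):

Centiseconds = 7,187,500 / 100,000 = 71 (remainder 87,500)

HundredsofMicroseconds = 87,500 / 1,000 = 87 (remainder 500)

Microseconds = 500 / 10 = 50

And 718,750 microseconds = 5,750 x 125 microseconds, which is consistent with the 125-microsecond tick.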

 

Could you consider making the timestamp conversion more precise in the next update?

 

Sincerely yours.


I have to run my times through a few functions before recording them in the UDF file system descriptors.

 

i.e. I take the file time (CreationTime/LastWriteTime/LastAccessTime as a FILETIME) and run it through FileTimeToLocalFileTime followed by FileTimeToSystemTime to end up with a SYSTEMTIME which is then used to fill out the descriptors.

 

None of that is going to cause a problem, is it, if I then start going back to the original FILETIME for the more precise values?

 

Anyway, I've now allowed for the extra precision (where possible) using your bit of code. Thanks :)
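
For what it's worth, here's a rough sketch of how the two fit together, assuming FileTimeToLocalFileTime only shifts the time by whole minutes and so never touches the sub-second part (the UdfSubSeconds type and FillSubSeconds function are made up for this example, not the actual ImgBurn code):

#include <windows.h>

/* Hypothetical helper type for this example only. */
typedef struct
{
   unsigned char Centiseconds;
   unsigned char HundredsofMicroseconds;
   unsigned char Microseconds;
} UdfSubSeconds;

/* Keep using FileTimeToLocalFileTime + FileTimeToSystemTime for the
   date/time fields, and fill the sub-second fields straight from the
   FILETIME (original or local -- the sub-second part should be identical). */
static void FillSubSeconds(const FILETIME *ft, UdfSubSeconds *out)
{
   ULARGE_INTEGER uli;
   uli.LowPart = ft->dwLowDateTime;
   uli.HighPart = ft->dwHighDateTime;
   out->Centiseconds = (unsigned char)((uli.QuadPart / 100000) % 100);
   out->HundredsofMicroseconds = (unsigned char)((uli.QuadPart / 1000) % 100);
   out->Microseconds = (unsigned char)((uli.QuadPart / 10) % 100);
}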

