Old February 24th 18, 08:52 PM posted to rec.photo.digital,alt.windows7.general
Paul[_10_]
A simple way to transfer photos from your phone to Windows without installing anything on either

ultred ragnusen wrote:
> Paul wrote:
>>> Once I burn the ISO to a disk will it be 'bootable' or will additional
>>> action be required first?
>>
>> It requires dancing a jig on one foot.
>
> The Tier 2 Microsoft support person at +1-800-642-7676 took control of
> another Windows 10 Pro system to download, burn, test, and run the same
> sequence of repair that we ran (and failed at) using the bricked Windows 10
> Pro recovery console.
> http://wetakepic.com/images/2018/02/...dvd_repair.jpg
>
> For the data, Knoppix worked just fine, but I am getting a very common
> error from Knoppix on files that shouldn't have that error, where, when I
> google for the error, NONE of the common causes can possibly be why I'm
> getting that error.
>
>    Error splicing file: Value too large for defined data type.
>
> http://wetakepic.com/images/2018/02/...x_error_01.jpg


One series of threads I could find blamed the cause on

   Ubuntu is just not building gcc with -D_FILE_OFFSET_BITS=64

which causes the 64-bit routines for file parameters to be used
automatically. You can declare such things explicitly when
programming, or, for a legacy program, recompiling with
-D_FILE_OFFSET_BITS=64 helps in an attempt to fix it automatically.

*******

https://bugs.launchpad.net/ubuntu/+s...ts/+bug/455122

The inode number in the example is huge.

# on cifs mount...
19656 open("grape.c", O_RDONLY|O_NOCTTY) = 3
19656 fstat64(3, {st_dev=makedev(0, 23), st_ino=145241087983005616, === not a normal inode
st_mode=S_IFREG|0755, st_nlink=1, st_uid=3872,
st_gid=1000, st_blksize=16384, st_blocks=1, st_size=25,
st_atime=2009/10/18-19:13:16, st_mtime=2009/10/18-19:00:51,
st_ctime=2009/10/18-22:31:53}) = 0
19656 close(3) = 0

If we convert that number to hex, it's 0x020400000004AFB0.
It's remotely possible the inode number is actually 4AFB0
and the upper portion is "noise" from an uninitialized
stack parameter or memory location.

That's probably not the only root cause, but I wanted
to at least see an example of what they might be
complaining about.

In Linux, when NTFS is mounted, stat() results are faked
to make Linux "comfortable" with the IFS being mounted.
The Linux inode number is actually formulated from the
file's #filenum in $MFT, so the parameter in fact has a
traceable origin. If you saw the errant inode number in
that case, you might be able to look in the $MFT and see
a "match" for the lower portion of the number (the 4AFB0
part).

Since you say you're staying "on-platform" and not using
SAMBA/CIFS for this transfer, the result is highly
unusual. I've never seen this error in all the times
I've tried things with various Linux distros. I might
even be convinced to run a memory test as my first step
(memtest86+).

After the memtest completed one pass successfully,
I would change distros. And move on.

*******

The other possibility is that the source disk is damaged
somehow. But the way Windows handles filenum, it doesn't
allow the number to grow and grow. When you delete a file,
the "slot" becomes available for the next file creation. This
helps keep the "epoch" of filenum values low. While the
filenum field is likely a large one (to suit the maximum
number of files NTFS is declared, per Wikipedia, to support),
users probably never see filenum values remotely
approaching the max value.

On my Win10 C: drive with the build trees in the user folder,
the stats (as collected by the old Win2K-era utility nfi.exe) are

Number of files: 1318185

Last file:

File 1341804

\Windows\servicing\Version\10.0.16299.245

So the highest #filenum (1341804) is not even remotely close
to a 64-bit number in magnitude. And I don't even know
if a corruption on the source side could be interpreted that
way.

Paul