Talk:Year 2038 problem

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Divad27182 (talk | contribs) at 21:27, 23 August 2010 (In Fairness). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

WikiProject Computing (Unassessed)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has not yet received a rating on Wikipedia's content assessment scale.
This article has not yet received a rating on the project's importance scale.
Archives

Other software problems in 2038?

Is it possible that the hypothetical impact of 99942 Apophis may cause some computer software to fail before or in the year 2038? - Zelaron (talk) 05:59, 21 January 2010 (UTC)[reply]

zip?!

I thought zip used DOS dates/times, and PKWARE's spec seems to say the same ("The date and time are encoded in standard MS-DOS format."). I don't remember the details of this format offhand, but I don't think its rollover comes in 2038. —Preceding unsigned comment added by Plugwash (talkcontribs)

MS-DOS format is a bit-encoding that simply packs the bits holding the year-since-1980, month, day, hour, minute, and second (in two-second resolution) into 2×16 bits. AFAIK, its rollover date is 2108-01-01.
I've removed the bit about Zip. In fact: The use of 32-bit time_t has also been encoded into file formats, which means it can live on for a long time beyond the life of the machines involved. Are there many examples of this? It appears to me that file formats change much more rapidly than operating systems; isn't this sentence just stating the obvious? squell 20:36, 9 February 2007 (UTC)[reply]

Not true by stephen hawkin

No, in fact you've got it backwards. Data and data formats tend to live much longer than OSes. As to examples, how about Tar (file format) - it's been around for over 30 years, and was standardized almost 20 years ago (in POSIX). The file's last-modification time is stored as a 12-digit octal number in ASCII, so it's a 36-bit number, not 32, but it's still a 1970-01-01 00:00:00 epoch. RossPatterson (talk) 04:15, 18 February 2008 (UTC)[reply]

Some file formats (and filesystem formats, which can suffer from similar problems, since at the end of the day both are just ways of structuring information within a big block of storage) stick around for ages. Some are in wide general-purpose use (zip, PNG, GIF, JPEG, MP3, FAT16, ...); others only in more specialist uses, or even only within one company. It looks like ext2, at least in its original form, stores a fixed-size 32-bit timestamp (http://www.nongnu.org/ext2-doc/ext2.html#I-ATIME). Plugwash (talk) 20:44, 25 January 2008 (UTC)[reply]

OK, strictly speaking, in ext2 the fields are defined as u32, but I don't know if 64-bit Linux versions consistently treat them as such. Plugwash (talk) 20:51, 25 January 2008 (UTC)[reply]

I think the question remains: while it's entirely plausible, can somebody provide an example of a file format which encodes a 32-bit time_t?

  • Many filesystems do, but filesystems are almost completely transparent to the user, and that doesn't sound like what the claim was.
  • GZIP (mentioned above) does, but "MTIME = 0 means no time stamp is available", so there's no harm in omitting it
  • PNG uses 7 fields for the timestamp, including 2 bytes for the year, so it's fine
  • GIF and JPEG do not appear to store timestamps
  • MP3 doesn't seem to store timestamps; ID3v1 tags use 4-byte years

I'm sure such a file format exists, but what? —Preceding unsigned comment added by 64.81.170.62 (talk) 19:49, 8 May 2008 (UTC)[reply]

It looks like cpio, when generating binary-format headers, uses a 32-bit Unix-style timestamp; I'm having trouble finding out whether it's supposed to be signed or unsigned, though. Plugwash (talk) 23:37, 15 July 2009 (UTC)[reply]

lol

"Using a (signed) 64-bit value introduces a new wraparound date in about 290 billion years, on Sunday, December 4, 292,277,026,596. However, this problem is not widely regarded as a pressing issue." - I think that sentence is the funniest thing I've read on wikipedia. Kudos. (Granted, that says something about the type of prose most articles have, but still.) 165.123.166.240 03:01, 18 February 2007 (UTC)[reply]

that remark really should be removed, as it is wholly unverifiable. Niffweed17, Destroyer of Chickens 02:21, 24 February 2007 (UTC)[reply]
It's one of the very few phrases I've seen on wikipedia that's a harmless attempt at humor. On the other hand, it's been a horrible point of disagreement. Maybe it's not worth it. ~a (usertalkcontribs) 03:39, 24 February 2007 (UTC)[reply]
What exactly is unverifiable about it? —Pengo 07:57, 24 February 2007 (UTC)[reply]
It's impossible to prove a negative. You could say anything isn't a pressing issue, but it doesn't mean it's worth saying. I came here to ask about taking it out, but you convinced me. Superm401 - Talk 09:42, 26 February 2007 (UTC)[reply]
I think we should take it out - even if it is humorous, it's not in an encyclopedic tone. -- stillnotelf is invisible 23:13, 26 February 2007 (UTC)[reply]
I strongly disagree. This is a factual statement that can be verified (at least the date). That a date several times further in the future than the time elapsed since the Big Bang is not a pressing problem seems plain enough. I suppose this is also analogous to the Y10K problem. If you insist on a compromise here, I would suggest wording somewhat similar to how the Y10K problem is also presented. --Robert Horning 21:14, 8 March 2007 (UTC)[reply]
I have introduced an appropriate source to verify the statement. BeL1EveR 21:57, 28 May 2007 (UTC)[reply]
There were disputes as to the veracity of the claim that the solar system will eventually be destroyed by the sun? – Mipadi 22:11, 28 May 2007 (UTC)[reply]
The link is more a means of putting the timescale into perspective, thus lending credibility to the rest of the sentence.BeL1EveR 22:46, 28 May 2007 (UTC)[reply]
"Credibility to the rest of the sentence?" It's a piece of humor. I don't think there's such a need to treat it so seriously. – Mipadi 01:09, 29 May 2007 (UTC)[reply]

Example Changed?

I could have sworn that the example used to be animated, stepping up the seconds to the fateful moment in question. Has it changed? Have I lost my mind? Jouster  (whisper) 10:07, 11 March 2007 (UTC)[reply]

I found the diff, and reverted the change; I've left a message on User:Ksd5's talk page. Jouster  (whisper) 18:38, 11 March 2007 (UTC)[reply]

It seems like the example doesn't work. The binary representation continuously goes from 1000 ... 0000 to 1000 ... 0111, and then switches from 1111 ... 000 to 0111 ... 1111 and back!? The decimal bit is showing positives and negatives, one after the other. —Preceding unsigned comment added by Ywarnier (talkcontribs) 10:13, 29 December 2008 (UTC)[reply]

Solutions

I know this is original research, but isn't a simpler solution to change the epoch date every 30 years? After all when you fill out a check book you only use the last two digits because the first two are already filled in for you. So just change the epoch to 2000, and in thirty years to 2030 and so on. Add a one byte count of what epoch you are using and that gets you 750 years down the road. 199.125.109.11 22:30, 13 June 2007 (UTC)[reply]

I also know this is original research. But where do you get 750 years from? You seem to be proposing a much more complicated version of a 5-byte number to store dates. A 5-byte number to store dates would come to 2^(8*5-1) seconds. That seems like it could be much more than 750 years. But if you're using a 5-byte number to store dates, you probably would just end up using an 8-byte number to store dates (which is what they are doing). ~a (usertalkcontribs) 22:40, 13 June 2007 (UTC)[reply]
No, that doesn't work. Historical data is the problem -- if you have a file that you edited in 1999, and you move the epoch from 1970 to 2000, then your file's timestamp now says 2029, which is just wrong. 64.81.73.35 (talk) 20:37, 9 January 2010 (UTC)[reply]
I don't see how your proposal of adding an "epoch counter" is any simpler than simply expanding the field to 64-bit. Plugwash (talk) 14:54, 13 February 2010 (UTC)[reply]
Agree with above; the problem is historical data. By changing the epoch, previously saved data would be misrepresented. This is worse than Y2K, as this is an OS problem instead of a coding problem. --Gamerk2 (talk) 16:06, 2 March 2010 (UTC)[reply]

Windows Vista

I found this oddly amusing. I changed Vista's clock to 6/28/2038 and restarted my computer. Immediately, my computer alerted me saying that Windows Media Player wouldn't work, and later Java SE binary shut off. Most of my programs seemed to run slower also. Funny thing is, Vista's date can be set up to 2099, which I did not try. Could any of this be tested by someone else, and does it seem to be a good addition to the article (near where an example of Windows XP is given)? Thanks! ---Signed By KoЯnfan71 (User PageMy Talk) 20:25, 28 June 2007 (UTC) [reply]

Alas, it's not appropriate for the article; see WP:OR. That said, if you want to report it to a WP:RS, and they write an article on it, we can then reference that article. Jouster  (whisper) 20:32, 28 June 2007 (UTC)[reply]
Alright. Thanks for saying so. I don't want to go through the trouble of doing anything for this, it's just something I did in spare time that took 10 minutes, so I don't care. ---Signed By KoЯnfan71 (User PageMy Talk) 23:49, 1 July 2007 (UTC)[reply]

Year 2038 Date

Websites I have checked, including this one, give the crucial 2038 date as Tuesday, January 19, 2038, 03:14:07 UTC. The number of seconds from 1 Jan 1970, 12:00:00 AM GMT is 2,147,483,647, the limit of a 31-bit counter. Assuming Julian years, this gives the date above. However, our calendar is Gregorian. Using the average length of the Gregorian year, 365.2425 days, I get the same date, but the time is 15:28:31 UTC. Furthermore, as one website points out, this count ignores the possibility of leap seconds being added.

Don Etz, (e-mail removed) 67.72.98.84 15:40, 12 July 2007 (UTC)[reply]

Unpredictable leap seconds notwithstanding: since neither the Gregorian nor the Julian calendar makes any mention of the number of seconds in a day - let alone specifies it differently - and since both specify exactly the same days in every year between 1970 and 2038 inclusive, I fail to see how the same timestamp interpreted under the two calendars would give times that differ by twelve hours, fourteen minutes, and twenty-four seconds. 202.27.216.35 05:02, 4 November 2007 (UTC) —Preceding unsigned comment added by JackSchmidt (talkcontribs) [reply]
The origin should be given unambiguously, as 1970-01-01 00:00:00 UTC. There can be doubt as to when 12:00:00 AM is. There should be no need to check web sites; the exact calculation is easy enough. There's no problem with 03:14:07; the problem occurs after that second ends. Leap Seconds are ignored in the count, which is better described as GMT or UT.
Javascript new Date(Math.pow(2, 31)*1000).toUTCString() gives the rollover time : Tue, 19 Jan 2038 03:14:08 UTC in IE6, Tue, 19 Jan 2038 03:14:08 GMT in Opera and FireFox.
It may be worth emphasising that those events are on the previous local date in North America.
At the start of the page, "since January 1, 1970" should be "from 1970-01-01 00:00:00 GMT". Eight hours since Jan 1 is breakfast-time on Jan 2.
82.163.24.100 20:14, 1 September 2007 (UTC)[reply]
No original research; if you can find a reliable source that says that that's the time (or if you can convince a reliable source that lists the one we're currently using that they're wrong!), go for it! Jouster  (whisper) 20:38, 12 July 2007 (UTC)[reply]
By the way, that last 0025 as in 365.2425 is not gradually applied, but only occurs as a chunk of 24 hrs once every 400 years, which already occurred in 2000, so it will not be used again between now and 2038. The same is true of the 24 in 2425: it only affects centuries, which do not have a leap year (other than once every 400 years). So in computing the time for rollover you simply need to count the number of leap years, 18 (the number of centuries is 1 and the number of years divisible by 400 is 1), and multiply by 24 hours in your computation. I was saying that using one char (8 bits) multiplies 256 times 30 years. I see now that is 7680 years, not 750 years. The leap seconds do not affect what time our clocks register when rollover occurs; they only affect how soon that will occur. 199.125.109.118 16:57, 19 July 2007 (UTC)[reply]
I've added back the time on the epoch date and cited it to a definition page from the Open Group; you don't get much more authoritative than that. Plugwash (talk) —Preceding comment was added at 23:15, 22 January 2008 (UTC)[reply]

On an eMac ...

"On an eMac, January 2038 displays 38 days. You cannot go forward in months anymore. You can go forward to January 2039, though, and go a lot further. In January 2039, the faded numbers belonging to the previous month instead say "Na". This presumably stands for "not applicable", but this is normally abbreviated to "N/A"." Two things. First, does this violate no original research? There's no citation, and it 'presumes' things. Also, "Na" is most likely NaN, cut off at two characters (as one likely would to get '01' instead of just '1'). Trevor Bekolay 18:53, 24 July 2007 (UTC)[reply]

I agree. I've reverted the addition. ~a (usertalkcontribs) 19:19, 24 July 2007 (UTC)[reply]


1901?

Guest edit here: should it really be 1901 at the end of the first paragraph, or does that mean the date it will roll over to afterwards?

Yeah, 1901 refers to the date it will rollover to. I agree that the wording can be a bit confusing, so I added a bit to clarify. Trevor Bekolay 16:13, 8 August 2007 (UTC)[reply]
Shouldn't it be 1970, as it rolls over and the first year is 1970...?
No, because it rolls over to a number that is minus 68 years from the epoch. --UltraMagnus (talk) 10:44, 23 September 2009 (UTC)[reply]

In Fairness

Does everyone believe that this will happen? It is the same with the "Y2K problem" and the "10000 problem"...it will be fine!!! —Preceding unsigned comment added by 91.142.229.87 (talk) 19:52, 11 November 2007 (UTC)[reply]

The article states "Most operating systems for 64-bit architectures already use 64-bit integers in their time_t. The move to these architectures is already underway and many expect it to be complete before 2038." So no, I don't think anyone thinks this will cause widespread problems in 2038. For 32-bit architectures though, it's not really an issue of 'will it happen' but 'what problems will occur as a result of it.' Trevor Talk 19:14, 12 November 2007 (UTC)[reply]
Also, how many apps are out there that don't use time_t but instead use int (which I believe is still 32-bit on some 64-bit systems) or, worse, int32_t? How many file formats store timestamps as a 32-bit integer (admittedly, apps using those file formats will be able to buy a bit of time by redefining the timestamps as unsigned)?
I suspect that like with Y2K there will be a mad rush to fix the things that are still broken in the few years before 2038 and like with Y2K there will be very few problems left by the rollover date. Plugwash (talk) 14:14, 17 December 2007 (UTC)[reply]

I am no scientist, but I find it humorous that, from what I have read, computer programmers were being paid to "fix" the computers before the year 2000, and they are the ones saying it would happen, right? "The clouds are going to fall out of the sky soon and crush the earth." "Omg, noooo!" "But for a small fee I can fix the sky above your house so that it won't happen to you." Everybody gets "cloud fixed", and when the day he said it would happen on arrives, everybody thanks him for fixing the issue. 69.220.1.137 (talk) 20:14, 13 July 2008 (UTC)[reply]

Different issue; Y2K was, at worst, a coding problem that software engineers knew about for decades. The 2038 problem is an OS problem, which affects a great many programs. Moving to 64 bits natively solves the OS problem, but any programs in use that use the 32-bit time value would still need to be patched. If the problem were 2012 instead of 2038, this would be far bigger than Y2K, simply because of how much of our backend uses Unix/Linux in particular, but we'll be well into the 64-bit [and possibly beyond] realm by 2038, and most critical software will hopefully have been updated by that time. —Preceding unsigned comment added by Gamerk2 (talkcontribs) 16:12, 2 March 2010 (UTC)[reply]

But seriously, why don't people use unsigned time? I think that would push the problem out to a point where the computers affected would be too obsolete for use anyway. —Preceding unsigned comment added by RSXLV (talkcontribs) 14:10, 30 March 2010 (UTC)[reply]

  • Actually, I think that originally they did! The only problem was that, at the time, there was no unsigned type. There certainly was no time_t. The addition of unsigned to the C language did not result in correcting the include files. The C standard defined time_t, but did not require a specific type. (I wish I had my ANSI C spec handy, to see if it was even required to be arithmetic. I don't think it was.) In any case, I am pretty sure time_t being signed is historical accident and convention, not design. David Garfield (talk) 21:27, 23 August 2010 (UTC)[reply]

The wrap-around to negative can crash the app.

A function like ctime() that has tables and such internally can crash if passed a time_t value either too far in the future or with a negative value. I only built with gcc 4.1.2 and MS Visual Studio Express 2005. The GNU Compiler Collection's 32-bit compiler still uses a 32-bit time_t by default. Microsoft's runtime crashes in ctime if the value is negative or about a thousand years in the future. Most time-formatting functions have tables built in for all the goofiness in the calendar, and when you index outside a table, you generally have a crash.

So, even though your signed 64-bit scalar value can represent times in the distant future and distant past, there is no guarantee that your application will display such times. Maybe it's an issue for an astronomical application that predicts stellar positions far into the past and future, and wants to display a date/time that people can (sort of) relate to, along with 'Tuesday'.

Anyway, heads up: Even though you can reserve and store that 64 bit value, your application won't necessarily correctly display calendar times billions of years from now, when the sun is just a cold, dark cinder and the moon and Earth are part of it, or billions of years in the past, before the Earth even formed. Not that a Gregorian calendar would mean anything in either context.

I added code to the article to illustrate the problem and test the runtime library. Hope it helps. —Preceding unsigned comment added by 70.7.17.202 (talk) 20:08, 15 December 2007 (UTC)[reply]

Somebody (not me) removed your addition. I agree with the removal because your program is considered original thought. For inclusion on Wikipedia, you're supposed to include references to reliable sources. ~a (usertalkcontribs) 05:25, 19 December 2007 (UTC)[reply]

hmmmm..1999

Apparently, many sources relate 1999 to the year of the world's end. Now all of us realize that 1999 already happened and we're all still here. But this discovery makes me think: maybe, since things will be resetting to 1901, the end of the world could in fact still be 1999; so 98 years after 2038 (year 2136 = 1999) can potentially be the end? —Preceding unsigned comment added by Tofu-miso (talkcontribs) 23:10, 13 January 2008 (UTC)[reply]

Example in practice

The Perl CGI::Cookie module, when using 32-bit dates, is readily susceptible to this issue today, in immediate practical application.

When setting cookies for distant dates in the future, it is quite common to specify some arbitrary far-off time. This is not, contrary to popular opinion, just a spammer or advertiser thing. For instance, you might want to be able to display to a user how many unique visitors have viewed their profile. Raw hits will count refreshes, and IP addresses are misleading with dynamic and shared IPs. The most practical definition of 'unique visitor' is therefore to store a random-string cookie; this will at least give you a closer approximation and a more trackable number.

Coders would therefore like to rely on the internal mechanisms that let you set expiry in relative formats such as '+100y', which the Perl CGI::Cookie and CGI.pm modules claim to allow.

However, if you specify such a date too far in the future it will immediately fall prey to the Y2038 problem, and, because HTTP Cookies with an expiry set in the past will *immediately* expire and thus be deleted from the user's cookies, said cookie will effectively never be set.

Example in code:

   # Assumes a mod_perl handler: $r is the Apache request object and
   # $dom holds the cookie domain, both set elsewhere.
   my %cookies = CGI::Cookie->fetch;
   if (defined $cookies{visitor}) {
       $ENV{UNIQUE_VISITOR} = $cookies{visitor};
   }
   else {
       # Derive a unique visitor ID from the PID, time, and client details.
       my $md5 = Digest::MD5->new;
       $md5->add($$, time, $ENV{REMOTE_ADDR}, $ENV{REMOTE_HOST}, $ENV{REMOTE_PORT});
       my $unique_id = $md5->hexdigest;
       my $vcookie = CGI::Cookie->new(-name    => 'visitor',
                                      -value   => $unique_id,
                                      -expires => '+30y',
                                      -path    => '/',
                                      -domain  => ".$dom");
       $r->headers_out->add('Set-Cookie' => $vcookie);
       $ENV{UNIQUE_VISITOR} = $unique_id;
   }
   $r->subprocess_env('UNIQUE_VISITOR', $ENV{UNIQUE_VISITOR});

The results:

200 OK
Connection: close
Date: Sun, 17 Feb 2008 23:41:42 GMT
Server: Apache/2.0.55 (Unix) PHP/5.0.5 mod_jk/1.2.23 mod_perl/2.0.2 Perl/v5.8.7
Content-Type: text/html; charset=ISO-8859-1
Client-Date: Sun, 17 Feb 2008 23:41:42 GMT
Client-Peer: 64.151.93.244:80
Client-Response-Num: 1
Set-Cookie: visitor=aaceff88e393bf4agrehf13e810a7a12a; \
  domain=.gothic-classifieds.com; path=/; expires=Sat, 04-Jan-1902 17:13:26 GMT
Set-Cookie: session=c8701b98426greg9a5d31117e5f01ae75; \
  domain=.gothic-classifieds.com; path=/; expires=Mon, 16-Feb-2009 23:41:42 GMT


Dropping it to '+29y', slightly before the Y2038 problem, yields instead:

Set-Cookie: visitor=40c0b19ab5greh873155f9af5c29df3bc; \
  domain=.gothic-classifieds.com; path=/; expires=Mon, 09-Feb-2037 23:42:45 GMT

This shows that this is not just a 'thing to think about in the future' but something that directly affects code right now. —Preceding unsigned comment added by 208.54.15.180 (talk) 00:20, 18 February 2008 (UTC)[reply]

This is a minor variation of the problem AOL already had that is mentioned in the article. Jon (talk) 18:17, 16 July 2008 (UTC)[reply]

Reworked intro

I significantly updated the intro paragraph to make a few things more clear:

  • It's the combination of storing time as a signed 32-bit integer AND interpreting this number as the number of seconds since 00:00:00 Jan 1, 1970
  • Every system that deals with time this way is affected, regardless of whether the system is "unix-like"
  • Removed some of the more technical details, like C and time_t, from the intro. These are discussed in more detail later.
  • Removed the more obscure "also known as" Y2K+38 and Y2.038K

I think these changes will make the introduction more understandable to first-time readers. netjeff (talk) 20:32, 4 October 2008 (UTC)[reply]

Don't see how this is a problem

Why don't they just add another few lines of binary to the date code? Should hold up for another few years that way, as the integer should last longer without "Reverting". —Preceding unsigned comment added by Kazturkey (talkcontribs) 10:51, 8 October 2008 (UTC)[reply]

More bits is, indeed, the solution, and is what is being done. (Some systems (e.g. Mac OS X) have supported 64 bit time for years.) The rub is that all of the relevant software needs to know to use the new system calls that support this. You can't just add bits to existing system calls, because the structures that are used to pass the data are of fixed geometries; and you can't magically make existing software use a wider structure. Each piece of software must be modified.

(Something that does puzzle me is that about everyone seems to think that it can't be done on a 32 bit CPU, or in a 32 bit OS—as if that ever stopped, for example, 8 bit computers from routinely manipulating 16 bit integers and 40 bit floats.)
überRegenbogen (talk) 12:48, 8 July 2009 (UTC)
[reply]

I'm confused about the type of time_t....

On i686 linux-2.6.xx, glibc 2.9 time_t is a (signed) long:

/* <bits/typesizes.h> */
#define __TIME_T_TYPE		__SLONGWORD_TYPE

/* <bits/types.h> */
#define __SLONGWORD_TYPE	long int
/* ... */
__STD_TYPE __TIME_T_TYPE __time_t;	/* Seconds since the Epoch.  */

/* <time.h> */
typedef __time_t time_t;

But this program:

#include <stdio.h>
#include <time.h>

int
main(int argc, char** argv)
{
	time_t ticks = 2147483647;
	ticks += 2147483647;
	ticks += 1;
	printf ("SIZEOF time_t: %d bits\nMAX    time_t: %u\n", sizeof(ticks)*8, ticks);
	return 0;
}

...compiled with:

gcc --std=c99 -o foo foo.c

...or:

gcc --std=c89 -o foo foo.c

...outputs the following:

SIZEOF time_t: 32 bits
MAX    time_t: 4294967295

Shouldn't ticks be overflowing? Why is it getting promoted to an unsigned long? (I specifically didn't initialize it to 4294967295U, to make sure gcc didn't do anything sneaky like that.)

I'm not sure why, but it appears that it's the "2106 bug" for current 32-bit Linux implementations using the GNU toolchain? 24.243.3.27 (talk) 05:51, 24 November 2008 (UTC)[reply]

printf("%u\n", -1) will produce the same result. ticks has the value −1 at the time of the printf, but because you used %u instead of %d it was printed as an unsigned integer. -- BenRG (talk) 14:27, 24 November 2008 (UTC)[reply]
Doh! Thanks for spotting that. For some reason my brain put the %u specifier (DWIM!). Sorry for the noise. 24.243.3.27 (talk) 17:37, 25 November 2008 (UTC)[reply]

openssl example

Here is an example demonstrating something or other with openssl. It used to be part of the main article. It was moved here for posterity, as it is unclear to anybody unfamiliar with openssl exactly what this example is trying to demonstrate:

$ date
Su 6. Jul 00:32:27 CEST 2008 
$ openssl req -x509 -in server.csr -key server.key -out server.crt -days 10789 && openssl x509 -in server.crt -text | grep After
Not After : Jan 18 22:32:32 2038 GMT
$ openssl req -x509 -in server.csr -key server.key -out server.crt -days 10790 && openssl x509 -in server.crt -text | grep After
Not After : Dec 14 16:04:21 1901 GMT (32-Bit System)
$ openssl req -x509 -in server.csr -key server.key -out server.crt -days 2918831 && openssl x509 -in server.crt -text | grep After
Not After : Dec 31 22:41:18 9999 GMT (64-Bit System)

--24.190.217.35 (talk) 23:03, 16 December 2008 (UTC)[reply]

clock 'reset' in the example

I think a more appropriate term for what happens to Unix time in 2038 is 'roll over'. 'Reset' is generally taken to mean 'set to all zeros', which is clearly not what is happening. My crude calculation indicates that the current system, if left unchanged, would reach zero again 68 years after 2038, i.e. in 2106. But this would happen only if the world suffered with a dysfunctional Unix time for 68 years without fixing it.

72.19.177.68 (talk) 17:40, 14 January 2009 (UTC)[reply]

Software engineering disasters???

Why is this in the 'Software engineering disasters' category? It's not a disaster; it's a known limitation. Just as a 32-bit int can only hold a certain range of numbers: hardly a disaster. I would like to see this removed from this category. 203.134.124.36 (talk) 03:09, 11 February 2009 (UTC)[reply]

Most disasters are precipitated by bad assumptions. Of course we haven't got that close to Y2K38 yet, but it looks to me like it's shaping up to be a repeat of Y2K, with a "meh" attitude taken until the last minute and then a panic to fix things in the last few years. Plugwash (talk) 23:23, 15 July 2009 (UTC)[reply]

TCP timestamp issue?

Will TCP also have a rollover problem in 2038 since it uses 4-byte times? Since we might not have migrated to IPv6 by then, won't this be another affected software component? -- Autopilot (talk) 23:18, 21 February 2009 (UTC)[reply]

No. TCP timestamps do not use a specified baseline (or "epoch") clock value, and the spec (RFC 11231323) has implementation requirements that directly address wraparound. RossPatterson (talk) 17:56, 22 February 2009 (UTC)[reply]
Ok. I don't see the discussion of wraparound in RFC 1123, however, other than the general robustness principle. Did you mean PAWS in RFC 1323? -- Autopilot (talk) 18:30, 22 February 2009 (UTC)[reply]
Sorry, typo on my part - yes, I meant 1323. The timestamps are indeed 32 bits, but as the RFC says, they are "obtained from a (virtual) clock that we call the "timestamp clock". Its values must be at least approximately proportional to real time,", rather than being a count of seconds since 1970-01-01T00:00:00. By way of illustration, as has been observed elsewhere, systems based on BSD Unix use a count of 1/2-second intervals since boot time, which meets the proportionality requirement and yet is unlikely to wrap - few systems run for 136 years between boots. RossPatterson (talk) 19:51, 22 February 2009 (UTC)[reply]

XKCD Reference

I've seen what happens to an article anytime the subject is mentioned in XKCD, and just wanted to let anyone working on this know that today's comic did just that for the Year 2038 problem. I'm not saying one way or the other whether or not it should be mentioned, just wanted to give a heads up in hopes that it will curtail the inevitable discussion that has taken place in many other Wikipedia articles. Scyclical (talk) 04:41, 8 July 2009 (UTC)[reply]

Removed xkcd reference, in accordance with other articles that have been vandalized by xkcd fans. See the discussion there instead, and remember, kids, there are other ways to display your fanboyism. Get a tattoo of a heart with the text "Randall, get out of my heart" or something. —Preceding unsigned comment added by 81.216.131.26 (talk) 17:31, 14 July 2009 (UTC)[reply]

IBM vague report

Yes, the article linked to at IBM says "Some sources also indicate" that ext4 fixes things until 2514, but they do not say who those sources are, and my reading of the source code gives me no indication of such a feature. Ext4 adds an extra 32 bits to inode timestamp fields to record the number of nanoseconds between seconds of the normal time stamps, but doesn't change the interpretation of the second count. Could someone else also read the source code and confirm my reading? Then please remove the reference to the misleading IBM article. —Preceding unsigned comment added by Thyrsus (talkcontribs) 02:05, 9 July 2009 (UTC)[reply]

mortgages

In 2008 slashdot ran a story "Y2K38 Watch Starts Saturday" http://it.slashdot.org/article.pl?sid=08/01/15/1928213 which speculated that some mortgage related software would have problems once it got to 30 years before the 2038 bug hits. Did anything come of this (a quick google doesn't seem to be turning anything up). Plugwash (talk) 23:06, 15 July 2009 (UTC)[reply]

40 year mortgages are not rare. — Arthur Rubin (talk) 00:05, 16 July 2009 (UTC)[reply]

Cleanup

I improved the grammar/style of this page. Lovek323 (talk) 02:57, 13 February 2010 (UTC)[reply]

Just wondering…

Thanks, Lovek323, for your cleanup of the tone of the article.

It seems to me that an advanced OS like Mac OS X provides a layer of abstraction over dates. I just did a =NOW() in Excel and added 50 × 365.25 days to it. The resulting value returned March 26, 2060. Note that Excel doesn’t have a date format showing the day of the week (like “Friday, March 26, 2060”), so it only had to handle leap years; indeed, such a simple calendar function could be built into the application. However, if I simply plug the date “3‑26‑2060” into FileMaker Pro, it properly knows that it is a Friday. I also note, though, that I cannot go to System Preferences>Date & Time and set a date greater than 12/31/2037.

The above leads me to strongly suspect that Apple’s OS X might rely only upon Unix’s time services for Finder activities such as dating files and behind-the-scenes housekeeping (such as periodically rebuilding the ‘whatis’ database). It clearly appears that Apple is using its own OS X‑based calendaring system for all properly written applications (those written in Carbon or Cocoa and which are compiled using compilers that call on Apple APIs) to call upon. This makes perfect sense. The Mac has long been used in astronomical mapping, where star and planet positions can be calculated and plotted for dates that are many centuries in the future. I’m confident that FileMaker and all these obscure astronomy applications intended for amateurs don’t have to make their own calendar work-arounds. It’s clear that Apple likely has a calendar API that works for dates many centuries from now.

This article could clearly benefit from the contributions from a programming expert. Greg L (talk) 00:41, 30 March 2010 (UTC)[reply]

Why didn't they use a Struct

[code]{ long long year; short int month; short int day; short int hour; short int sec; short int millisec; short int microsec; short int picosec; short int ridiculoussec; }[/code] nabs. --79.130.4.132 (talk) 11:25, 3 August 2010 (UTC)[reply]

To save space and calculation time, I believe. That would make time_t a non-arithmetic type (at least in C; it could be overloaded for C++, at the expense of considerable time cost). This doesn't have much to do with improving the article, but it is a reasonable question.
Also, there is (or was, in the initial standard) no guarantee that "short int" be longer than 1 byte, making "millisec" inoperative. — Arthur Rubin (talk) 15:17, 3 August 2010 (UTC)[reply]
As Arthur Rubin says. For example, to calculate the "distance" between two dates you can make a simple subtraction: date1 - date2 = distance. --Enric Naval (talk) 15:25, 3 August 2010 (UTC)[reply]