Why is file locking alone not sufficient in multi-user systems?

Ritchie claims that file locking is not sufficient to prevent the confusion caused by programs, such as editors, that make a copy of a file while editing and then overwrite the original file when done. Can you explain what he meant?


He also said locks were not necessary, an assertion most engineers would consider untrue.

The remark appears in D. M. Ritchie and K. Thompson, "The UNIX Time-Sharing System", The Bell System Technical Journal, Vol. 57, No. 6 (July-August 1978), Part 2, pp. 1905-1929.

The context of the remarks was whether the operating system itself needed to provide file locking. Unix v6-era (and earlier) filesystems provided no file locking, and since the system was not faced with large, updatable databases maintained by independent processes, the authors argued the omission did not matter in practice.

Locking would not have been sufficient because an editor does not modify a file in place: it reads the file into a private copy, lets the user edit that copy, and writes the whole buffer back when done. A lock held only around the read or the write cannot stop two users who started from the same original from each writing back their own copy, with the second write silently discarding the first user's changes. To actually help, a lock would have to be held for the entire editing session.
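The lost-update scenario can be sketched with in-memory stand-ins (the list and lock below are hypothetical models of the file and a per-write file lock, not anything from the paper):

```python
# Sketch of the "editor" lost-update problem: even though every
# write-back is serialized by a lock, user A's edit is still lost,
# because each editor worked on a private copy of the file.
import threading

shared_file = ["original line"]   # stands in for the file on disk
write_lock = threading.Lock()     # models a file lock held per write

# Both editors read the file into private buffers before either writes.
buffer_a = list(shared_file)
buffer_b = list(shared_file)

buffer_a.append("edit by A")      # user A edits their copy
buffer_b.append("edit by B")      # user B edits their copy

with write_lock:                  # A writes back under the lock
    shared_file[:] = buffer_a
with write_lock:                  # B writes back under the lock
    shared_file[:] = buffer_b

print(shared_file)                # A's edit is gone despite the locking
```

The lock serializes the writes, but serialization is not the problem: B's buffer was built from a stale original, so B's write-back replaces the file wholesale. Only a lock spanning the whole read-edit-write cycle would prevent this.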

It was not necessary because the kernel protects its own internal data structures with locks, so the filesystem remains "logically consistent" even when two processes write the same file at once. I take the latter to refer primarily to multiple processes appending to the same output file, such as a shared stdout or stderr log.
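That append case can be demonstrated with `O_APPEND`, a later formalization of this behavior (the temp-file path below is illustrative; the point is only that concurrent appenders need no application-level lock):

```python
# Sketch: with O_APPEND, the kernel seeks to end-of-file and writes in
# one atomic step, so independent writers never overwrite each other's
# data. No user-level file lock is needed for this kind of consistency.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log.txt")

# Two independent writers (separate file descriptors, as two processes
# would have) open the same log file in append mode.
fd1 = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
fd2 = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)

os.write(fd1, b"writer 1\n")        # lands at end of file
os.write(fd2, b"writer 2\n")        # also lands at end of file
os.write(fd1, b"writer 1 again\n")  # nothing is clobbered

os.close(fd1)
os.close(fd2)

with open(path) as f:
    print(f.read())                 # all three lines are present
```

Interleaved appends may arrive in any order under real concurrency, but every write survives intact, which is the "logical consistency" the paper promises without any locks visible to user programs.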
