Wednesday, 3 December 2014

Desktop Lock Free Online Software Free Download By Soul Hacker

Generally, locks are advisory locks, where each thread cooperates by acquiring the lock before accessing the corresponding data. Some systems also implement mandatory locks, where attempting unauthorized access to a locked resource will force an exception in the entity attempting to make the access.
The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data. Other schemes also provide shared access for reading data. Other widely implemented access modes are exclusive, intend-to-exclude and intend-to-upgrade.
Another way to classify locks is by what happens when the lock strategy prevents progress of a thread. Most locking designs block the execution of the thread requesting the lock until it is allowed to access the locked resource. With a spinlock, the thread simply waits ("spins") until the lock becomes available. This is efficient if threads are blocked for a short time, because it avoids the overhead of operating system process re-scheduling. It is inefficient if the lock is held for a long time, or if the progress of the thread that is holding the lock depends on preemption of the locked thread.
Locks typically require hardware support for efficient implementation. This support usually takes the form of one or more atomic instructions such as "test-and-set", "fetch-and-add" or "compare-and-swap". These instructions allow a single process to test if the lock is free, and if free, acquire the lock in a single atomic operation.
Uniprocessor architectures have the option of using uninterruptible sequences of instructions, using special instructions or instruction prefixes to disable interrupts temporarily, but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.
The reason an atomic operation is required is concurrency, where more than one task may execute the same logic over the same data at the same time.
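As an illustration, here is a minimal spinlock sketch in Python. CPython exposes no raw hardware atomics to user code, so the non-blocking `acquire` of `threading.Lock` stands in for the atomic test-and-set instruction; the class name and structure are illustrative, not a standard API:

```python
import threading

class SpinLock:
    """Spinlock sketch: busy-waits on an atomic "test-and-set".
    The non-blocking acquire of threading.Lock plays the role of
    the single atomic hardware instruction."""

    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        # "Test" and "set" happen in one indivisible step; spin on failure.
        while not self._flag.acquire(blocking=False):
            pass  # busy-wait: cheap for short waits, wasteful for long ones

    def release(self):
        self._flag.release()

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(10_000):
        lock.acquire()
        counter += 1          # read-modify-write, now protected
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increments are lost
```

Without the lock, the `counter += 1` read-modify-write could interleave between threads and lose updates, which is exactly why the test and the set must be one atomic step.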
Before being introduced to lock granularity, one needs to understand three concepts about locks.
  • lock overhead: The extra resources for using locks, like the memory space allocated for locks, the CPU time to initialize and destroy locks, and the time for acquiring or releasing locks. The more locks a program uses, the more overhead associated with the usage.
  • lock contention: This occurs whenever one process or thread attempts to acquire a lock held by another process or thread. The more fine-grained the available locks, the less likely one process/thread will request a lock held by the other. (For example, locking a row rather than the entire table, or locking a cell rather than the entire row.)
  • deadlock: The situation when each of two tasks is waiting for a lock that the other task holds. Unless something is done, the two tasks will wait forever.                                                                                       
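The two-task deadlock above is easy to reproduce with two locks taken in opposite orders; a common cure, sketched below in Python, is to make every task acquire the locks in one agreed-upon global order (the timeout is just an illustrative safety net, not a general fix):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(first, second, results, name):
    """Acquire two locks, `first` then `second`.  If two threads use
    opposite orders, each can end up holding one lock and waiting
    forever for the other: a deadlock.  Acquiring with a timeout at
    least lets a thread back off instead of hanging."""
    with first:
        if second.acquire(timeout=1.0):
            results.append(name)       # both locks held: do the work
            second.release()
        else:
            results.append(name + ":timed-out")

# Deadlock avoidance: every thread takes the locks in the same
# (arbitrary but global) order, here lock_a before lock_b.
results = []
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, results, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_a, lock_b, results, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2']: consistent order, so no deadlock
```

Swapping the lock arguments for one of the two threads recreates the classic wait-forever scenario (here surfacing as a timeout rather than a hang).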

  • There is a tradeoff between decreasing lock overhead and decreasing lock contention when choosing the number of locks in synchronization.
    An important property of a lock is its granularity. The granularity is a measure of the amount of data the lock is protecting. In general, choosing a coarse granularity (a small number of locks, each protecting a large segment of data) results in less lock overhead when a single process is accessing the protected data, but worse performance when multiple processes are running concurrently. This is because of increased lock contention. The more coarse the lock, the higher the likelihood that the lock will stop an unrelated process from proceeding. Conversely, using a fine granularity (a larger number of locks, each protecting a fairly small amount of data) increases the overhead of the locks themselves but reduces lock contention. Granular locking where each process must hold multiple locks from a common set of locks can create subtle lock dependencies. This subtlety can increase the chance that a programmer will unknowingly introduce a deadlock.
    In a database, for example, a lock could protect, in order of decreasing granularity, part of a field, a field, a record, a data page, or an entire table. Coarse granularity, such as using table locks, tends to give the best performance for a single user, whereas fine granularity, such as record locks, tends to give the best performance for multiple users.
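The trade-off can be sketched with a toy "striped" table in Python: one lock per group of keys (fine granularity) rather than one lock for everything (coarse). The class and stripe count are illustrative, not a standard API:

```python
import threading

class StripedCounterTable:
    """Fine-grained locking sketch: one lock per "stripe" of keys
    instead of one lock for the whole table.  Threads touching
    different stripes never contend (less lock contention); the cost
    is more lock objects and more acquire/release bookkeeping
    (more lock overhead)."""

    def __init__(self, stripes=8):
        self._locks = [threading.Lock() for _ in range(stripes)]
        self._data = {}

    def _lock_for(self, key):
        return self._locks[hash(key) % len(self._locks)]

    def increment(self, key):
        with self._lock_for(key):     # lock only this key's stripe
            self._data[key] = self._data.get(key, 0) + 1

    def get(self, key):
        with self._lock_for(key):
            return self._data.get(key, 0)

table = StripedCounterTable()

def worker(key):
    for _ in range(5_000):
        table.increment(key)

threads = [threading.Thread(target=worker, args=(k,)) for k in ("x", "y", "x", "y")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table.get("x"), table.get("y"))  # 10000 10000
```

With a single table-wide lock the result would be the same, but all four threads would serialize on one lock; with stripes, threads working on different keys usually proceed in parallel.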
  • Lock-based resource protection and thread/process synchronization have many disadvantages:
    • They cause blocking, which means some threads/processes have to wait until a lock (or a whole set of locks) is released. If one of the threads holding a lock dies, stalls/blocks or goes into any sort of infinite loop, other threads waiting for the lock may wait forever.
    • Lock handling adds overhead for each access to a resource, even when the chances for collision are very rare. (However, any chance for such collisions is a race condition.)
    • Locks can be vulnerable to failures and faults that are often very subtle and may be difficult to reproduce reliably. One example is deadlock, where (at least) two threads each hold a lock that the other thread wants, and neither will give up the lock it holds until it has acquired the other.
    • Lock contention limits scalability and adds complexity.
    • The optimal balance between lock overhead and contention can be unique to the problem domain (application) and sensitive to design, implementation, and even low-level system architectural changes. These balances may change over the life cycle of an application and may entail tremendous changes to update (re-balance).
    • Locks are only composable (e.g., managing multiple concurrent locks in order to atomically delete Item X from Table A and insert X into Table B) with relatively elaborate (overhead) software support and perfect adherence by applications programming to rigorous conventions.
    • Priority inversion: a low-priority thread/process holding a common lock can prevent high-priority threads/processes from proceeding. Priority inheritance can be used to reduce the duration of priority inversion, and the priority ceiling protocol can be used to prevent it.
    • Convoying: all other threads have to wait if a thread holding a lock is descheduled due to a time-slice interrupt or page fault.
    • Hard to debug: Bugs associated with locks are time dependent. They are extremely hard to replicate.
    Some concurrency control strategies avoid some or all of these problems. For example, a funnel or serializing tokens can avoid the biggest problem: deadlocks. Alternatives to locking include non-blocking synchronization methods, like lock-free programming techniques and transactional memory. However, such alternative methods often require that the actual lock mechanisms be implemented at a more fundamental level of the operating software. Therefore, they may only relieve the application level from the details of implementing locks, with the problems listed above still needing to be dealt with beneath the application.
    In most cases, proper locking depends on the CPU providing a method of atomic instruction stream synchronization (for example, the addition or deletion of an item into a pipeline requires that all contemporaneous operations needing to add or delete other items in the pipe be suspended during the manipulation of the memory content required to add or delete the specific item). Therefore, an application can often be more robust when it recognizes the burdens it places upon an operating system and is capable of gracefully recognizing the reporting of impossible demands.
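The composability point above (atomically deleting Item X from Table A and inserting it into Table B) can be sketched in Python. The "rigorous convention" here is that every caller must acquire both locks in the same global order; sorting by `id()` is one illustrative choice of order, and nothing enforces it but discipline:

```python
import threading

table_a = {"X": "payload"}
table_b = {}
lock_a = threading.Lock()
lock_b = threading.Lock()

def move_item(key, src, src_lock, dst, dst_lock):
    """Atomically delete `key` from one table and insert it into the
    other.  Both locks must be held for the whole operation so no
    reader ever observes the item in neither (or both) tables, and
    every caller must take the locks in the same global order or two
    opposite moves could deadlock."""
    first, second = sorted([src_lock, dst_lock], key=id)
    with first:
        with second:
            dst[key] = src.pop(key)

move_item("X", table_a, lock_a, table_b, lock_b)
print(table_a, table_b)  # {} {'X': 'payload'}
```

A caller that took `dst_lock` before `src_lock` directly, bypassing the ordering convention, would silently reintroduce the deadlock risk, which is why lock-based designs compose poorly without such conventions.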
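As a sketch of one lock-avoiding alternative at the application level, message passing: worker threads send updates to a single owner thread through a `queue.Queue` instead of mutating shared state under a lock. Note that `queue.Queue` itself locks internally, so this only moves locking below the application level, as the text observes:

```python
import threading
import queue

updates = queue.Queue()
totals = {"n": 0}          # touched only by the owner thread

def owner():
    """Single owner of the shared state: applies updates in the order
    they arrive, so the application code needs no explicit locks."""
    while True:
        delta = updates.get()
        if delta is None:          # sentinel: shut down
            break
        totals["n"] += delta

def producer():
    for _ in range(10_000):
        updates.put(1)             # send a message instead of locking

owner_t = threading.Thread(target=owner)
owner_t.start()
producers = [threading.Thread(target=producer) for _ in range(4)]
for p in producers:
    p.start()
for p in producers:
    p.join()
updates.put(None)                  # all producers done: stop the owner
owner_t.join()
print(totals["n"])  # 40000
```

Because only one thread ever touches `totals`, there is no user-visible lock to forget, leak, or deadlock on; the trade-off is the latency and memory of queuing every update.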


Item Reviewed: Desktop Lock Free Online Software Free Download By Soul Hacker Description: Rating: 5 Reviewed By: Hassnain Ali