Wednesday, March 23, 2011

Powerlink - Basic Concept

Ethernet POWERLINK is a deterministic real-time protocol for standard Ethernet. It is an open protocol managed by the Ethernet POWERLINK Standardization Group (EPSG). It was introduced by Austrian automation company B&R in 2001.

Despite the name, this protocol has nothing to do with power distribution via Ethernet cabling or Power over Ethernet (PoE), power-line communication, or Bang & Olufsen's PowerLink cable.

Overview
Ethernet POWERLINK was designed with standards compliance in mind. It extends Ethernet with a mixed polling and time-slicing mechanism, which provides:

Guaranteed transfer of time-critical data in very short isochronous cycles with configurable response times
Time synchronisation of all nodes in the network with sub-microsecond precision
Transmission of less time-critical data in a reserved asynchronous channel
Modern implementations reach cycle times below 200 µs and a timing precision (jitter) of less than 1 µs.

Standardization
POWERLINK is standardized as a public standard by the open user and vendor group EPSG (Ethernet POWERLINK Standardization Group). The EPSG was founded in June 2003 as an independent association. Its focus is to leverage the advantages of Ethernet for high-performance real-time networking systems based on the Ethernet POWERLINK real-time protocol, introduced by B&R at the end of 2001. Various working groups focus on different tasks such as safety, technology, marketing, certification and end users. The EPSG cooperates with leading standardization bodies and associations, such as the CAN in Automation (CiA) group and the IEC.

Physical layer
The original physical layer specified was 100BASE-X Fast Ethernet (IEEE 802.3). Since the end of 2006, Ethernet POWERLINK has also been used over Gigabit Ethernet, which supports a transmission rate ten times higher (1,000 Mbit/s). This provides headroom for larger systems with higher production performance, many modular control systems, numerous drives and fully integrated safety equipment. Gigabit Ethernet is on the threshold of general dissemination in IT systems, so no major changes to system designs, components or cabling are needed; only network interfaces that handle the higher transmission rate and somewhat better cabling (Cat 6) must be used. The transition to faster Ethernet variants remains possible at any time, since Ethernet POWERLINK sits on top of standard Ethernet and can be implemented with standard components such as microcontrollers and FPGA modules. Within the real-time domain, the use of repeating hubs instead of switches is recommended to minimise delay and jitter. Ethernet POWERLINK follows IAONA's Industrial Ethernet Planning and Installation Guide for clean cabling of industrial networks, and both the 8P8C (RJ45) and M12 industrial Ethernet connectors are accepted.

Data Link Layer
The standard Ethernet data link layer is extended in Ethernet POWERLINK by an additional bus scheduling mechanism, which ensures that only one node accesses the network at any given time. The schedule is divided into an isochronous phase and an asynchronous phase. During the isochronous phase, time-critical data is transferred, while the asynchronous phase provides bandwidth for the transmission of non-time-critical data. The Managing Node (MN) grants access to the physical medium via dedicated poll request messages. As a result, only a single Controlled Node (CN) transmits at a time, which avoids the collisions that normally occur on standard Ethernet. The CSMA/CD mechanism of standard Ethernet, which causes non-deterministic behaviour, is effectively superseded by this collision-avoidance scheduling.

Basic Cycle
After system start-up has finished, the real-time domain operates under real-time conditions. The scheduling of the basic cycle is controlled by the Managing Node (MN). The overall cycle time depends on the amount of isochronous data, the amount of asynchronous data and the number of nodes to be polled during each cycle.

The basic cycle consists of the following phases:

Start Phase: The Managing Node sends a synchronization message to all nodes. This frame is called SoC (Start of Cycle).
Isochronous Phase: The Managing Node polls each node in turn to transfer time-critical data for process or motion control by sending a PReq (Poll Request) frame. The addressed node answers with a PRes (Poll Response) frame. Since all other nodes listen to all data during this phase, the communication system provides a producer-consumer relationship.
The time frame which includes PReq-n and PRes-n is called the time slot of the addressed node.

Asynchronous Phase: The Managing Node grants one particular node the right to send ad-hoc data by sending out the SoA (Start of Asynchronous) frame. The addressed node answers with an ASnd frame. Standard IP-based protocols and addressing can be used during this phase.
The quality of the real-time behaviour depends on the precision of the overall basic cycle time. The length of the individual phases can vary as long as the total of all phases remains within the basic cycle time boundaries. Adherence to the basic cycle time is monitored by the Managing Node. The duration of the isochronous and the asynchronous phase can be configured.
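
To make the sequence above concrete, here is a minimal sketch in Python that simulates one Managing Node polling a few Controlled Nodes within a fixed basic cycle. It is a timing illustration only, not a real POWERLINK stack: the node IDs, the 2 ms cycle time and the print-based frames are assumptions chosen for demonstration.

import time

CYCLE_TIME_S = 0.002          # assumed 2 ms basic cycle, for illustration only
CONTROLLED_NODES = [1, 2, 3]  # hypothetical Controlled Node IDs

def run_basic_cycle(cycle_no):
    t0 = time.monotonic()

    # Start phase: the MN broadcasts the synchronization frame (SoC).
    print(f"cycle {cycle_no}: SoC (broadcast)")

    # Isochronous phase: the MN polls each node; every node can consume each PRes.
    for node in CONTROLLED_NODES:
        print(f"cycle {cycle_no}:   PReq -> node {node}")
        print(f"cycle {cycle_no}:   PRes <- node {node} (seen by all nodes)")

    # Asynchronous phase: the MN grants one node the right to send ad-hoc data.
    granted = CONTROLLED_NODES[cycle_no % len(CONTROLLED_NODES)]
    print(f"cycle {cycle_no}: SoA -> node {granted}")
    print(f"cycle {cycle_no}:   ASnd <- node {granted}")

    # Idle until the basic cycle time has elapsed, so the cycle length stays fixed.
    remaining = CYCLE_TIME_S - (time.monotonic() - t0)
    if remaining > 0:
        time.sleep(remaining)

for n in range(3):
    run_basic_cycle(n)

In a real network the Managing Node would of course place Ethernet frames on the wire and the polled node would answer; the wait at the end simply stands in for the Managing Node's monitoring of the basic cycle time.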

Multiplex for Bandwidth Optimization

In addition to transferring isochronous data during each basic cycle, some nodes can also share transfer slots for better bandwidth utilization. The isochronous phase therefore distinguishes between transfer slots dedicated to particular nodes, which send their data in every basic cycle, and slots shared by several nodes, which transfer their data in turn across different cycles. In this way, less important yet still time-critical data can be transferred in longer cycles than the basic cycle. Assigning the slots during each cycle is at the discretion of the Managing Node.
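
As an illustration of this idea, the following Python sketch distinguishes continuously polled nodes from a group of multiplexed nodes that share a single slot; the node IDs and the slot layout are made-up assumptions, not taken from the specification.

CONTINUOUS_NODES = [1, 2]         # polled in every basic cycle
MULTIPLEXED_NODES = [10, 11, 12]  # share one slot, polled in turn

def isochronous_poll_list(cycle_no):
    # The shared slot rotates through the multiplexed nodes, one per cycle.
    shared = MULTIPLEXED_NODES[cycle_no % len(MULTIPLEXED_NODES)]
    return CONTINUOUS_NODES + [shared]

for cycle_no in range(4):
    print(f"cycle {cycle_no}: poll {isochronous_poll_list(cycle_no)}")
# cycle 0: poll [1, 2, 10]
# cycle 1: poll [1, 2, 11]
# cycle 2: poll [1, 2, 12]
# cycle 3: poll [1, 2, 10]

Each multiplexed node is thus served every third cycle, so its effective cycle time is three times the basic cycle time, while the bandwidth cost per cycle stays that of a single slot.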

Real Time Systems

Real-time operating systems are systems in which certain processes or operations have guaranteed minimum and/or maximum response times. That is to say, the system ensures that it will complete operation x after time t1 but before time t2, whatever t1 and t2 are, without fail, even at the expense of other lower-priority operations.

Speed, in and of itself, is not critical; the primary goal is predictability. A response time shorter than t1 may be just as bad as, or worse than, one longer than t2.
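
As a small illustration, with arbitrary example values: a real-time requirement is a time window rather than a single upper bound, so a check has to reject responses that arrive too early as well as too late.

def response_ok(response_time_s, t_min_s, t_max_s):
    # A response is correct only if it falls inside the required window.
    return t_min_s <= response_time_s <= t_max_s

print(response_ok(0.0008, 0.001, 0.002))  # False: too early can also be a fault
print(response_ok(0.0015, 0.001, 0.002))  # True
print(response_ok(0.0025, 0.001, 0.002))  # False: deadline missed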

Real-time operating systems are, as a general rule, only used in time-dependent embedded applications; general-purpose systems rarely, if ever, need to meet real-time constraints. When such constraints do occur, real-time considerations may rule out the use of some common techniques, such as virtual memory, which can make the system's behavior less deterministic. Best-effort real-time functionality can, however, be useful in general-purpose systems for supporting the needs of applications like digital audio workstations, which demand both reliability (live recording) and low latency (real-time synthesis/processing), often at the same time.

One of the best-known real-time operating systems for the x86 platform is QNX. Each QNX system call is documented with a worst-case completion time.

It should be noted that "being real time" does not necessarily mean that an OS is very good at playing MPEGs or using hardware efficiently; this is a common misunderstanding. On the contrary, providing hard real-time services implies that a system can commit only a limited percentage of its resources, including CPU time. It also fundamentally changes how software for the system can be built. For example, Rate Monotonic Scheduling, a hard real-time scheduling algorithm, can guarantee timing constraints only up to roughly 70% CPU load. Beyond that, the system has "to hit the red button" because it can no longer guarantee anything.

This implies that applications have to state their run-time requirements beforehand: how often they must run per second, what maximum response time is acceptable, and so on. All of this information must be provided by the application programmer. In some cases the information is given implicitly, for example by arranging processes into a priority order that allows them to meet their goals.
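
As a sketch of such an up-front declaration: the example below lists hypothetical periodic tasks with their worst-case execution times and periods and applies the classic Liu & Layland utilization bound for Rate Monotonic Scheduling, U <= n * (2^(1/n) - 1), which approaches roughly 69% for large task sets. The test is sufficient but not necessary, and the task values are invented for illustration.

def rms_schedulable(tasks):
    # tasks: list of (wcet_s, period_s) tuples describing periodic tasks.
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1 / n) - 1)  # Liu & Layland bound (sufficient condition)
    return utilization, bound, utilization <= bound

tasks = [(0.001, 0.005), (0.002, 0.010), (0.003, 0.020)]  # hypothetical task set
u, bound, ok = rms_schedulable(tasks)
print(f"utilization={u:.3f}, bound={bound:.3f}, schedulable={ok}")
# utilization=0.550, bound=0.780, schedulable=True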


Bottom line: hard real time is for industrial, medical, or military systems. On your average desktop, it is misplaced.

(Copied from http://wiki.osdev.org/Real-Time_Systems )