One of the big research areas that remains open in this topic is how to achieve flow and congestion control (or avoidance) for reliable multicast applications.
When sending to a group and confronted with a congested link or receiver, a sender has a number of options, and which is best depends on the nature of the application. The options span a spectrum: send at the rate of the slowest receiver, send at the average or median rate, or send at the rate of the fastest. Where in this spectrum we choose to run our algorithm depends on the nature of error recovery, on the time-frame of the session, and on that of the congestion.
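The rate-selection spectrum can be sketched as a simple policy choice over per-receiver feedback. This is purely illustrative: the policy names, the reported-rate model, and the `choose_rate` helper are assumptions for the sketch, not part of any standard reliable multicast protocol.

```python
# Illustrative sketch of the rate-selection spectrum (assumed model:
# each receiver reports a sustainable rate in kbit/s).
import statistics

def choose_rate(receiver_rates, policy):
    """Pick a sending rate from per-receiver sustainable rates."""
    if policy == "slowest":      # fair to every receiver
        return min(receiver_rates)
    elif policy == "median":     # leaves the slowest receivers behind
        return statistics.median(receiver_rates)
    elif policy == "fastest":    # only the best-connected receiver keeps up
        return max(receiver_rates)
    raise ValueError(f"unknown policy: {policy}")

rates = [64, 128, 512, 1024]          # rates reported by four receivers
print(choose_rate(rates, "slowest"))  # 64
print(choose_rate(rates, "median"))   # 320.0
print(choose_rate(rates, "fastest"))  # 1024
```

Each policy corresponds to one point on the spectrum above; the examples that follow show applications suited to different points.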
For example, a reliable multicast protocol used to pre-fetch data for Web caches is only a performance enhancement to normal Web access, so it can run at the average rate and simply leave behind receivers that are too slow (and they can choose to leave the receiver group if they cannot keep up). In fact, such an application could be quite tolerant of the group membership changing throughout a session.
On the other hand, distributing a set of slides to a distributed class might require one to run at the minimum rate, to be fair to all the students.
The global congestion picture is not at all clear for reliable multicast (or for unreliable multicast, for that matter). For example, TCP's classic congestion control algorithm[#!van:88!#] achieves a level of fairness for relatively long-lived unicast connections, and even manages to keep the network running at high utilisation. For a set of multicast ``connections'', it is not at all clear that the same scheme will be stable enough: there are multiple senders and recipients, some sharing links, so there may be multiplier effects that demand an even more cautious back-off than TCP's.
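One way to picture the more cautious back-off is a sender that reacts at most once per congestion epoch, however many receivers report loss, but cuts its rate by more than TCP's halving. The 0.25 factor, the epoch model, and the `backoff` helper below are assumptions made for illustration only, not a proposed or standard algorithm.

```python
# Illustrative sketch: TCP halves its rate on congestion; a multicast
# sender whose receivers share congested links might need a harsher cut,
# applied once per epoch to avoid compounding many loss reports.
# The 0.25 factor and the loss-report model are assumptions.

def backoff(rate, loss_reports, factor=0.25, floor=1.0):
    """Cut the sending rate once per congestion epoch if any receiver
    in the group reported loss during that epoch."""
    if loss_reports > 0:
        rate = max(floor, rate * factor)
    return rate

rate = 1000.0
for reports in [0, 3, 1, 0]:   # loss reports per epoch from the group
    rate = backoff(rate, reports)
print(rate)   # 62.5 after two congested epochs
```

Aggregating all of an epoch's loss reports into a single decrease is what keeps several receivers on one shared bottleneck from triggering several multiplicative cuts for the same congestion event.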