What is a fixed interval schedule of reinforcement?

Schedules of reinforcement are the precise rules used to present (or to remove) reinforcers (or punishers) following a specified operant behavior. These rules are defined in terms of the time and/or the number of responses required to present (or to remove) a reinforcer (or a punisher). Different schedules of reinforcement produce distinctive effects on operant behavior.


Interval Schedule

Interval schedules require that a minimum amount of time pass between successive reinforced responses (e.g., 5 minutes). Responses made before this time has elapsed are not reinforced. Interval schedules may specify a fixed time period between reinforcers (Fixed Interval schedule) or a variable time period between reinforcers (Variable Interval schedule).

Fixed Interval schedules produce an accelerated rate of response as the time of reinforcement approaches. Students' visits to the university library show a decided increase in rate as the time of final examinations approaches.

Variable Interval schedules produce a steady rate of response. Presses of the "redial" button on the telephone are sustained at a steady rate when you are trying to reach your parents and get a "busy" signal on the other end of the line.
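As a rough illustration of this timing rule, the sketch below models an interval schedule in Python: only the first response made after the required time has elapsed is reinforced. The function name, the closure-based bookkeeping, and the 300-second example are illustrative assumptions, not part of any standard formulation.

    import random

    def interval_reinforcer(interval_seconds, variable=False):
        """Decide whether each response earns a reinforcer under an interval schedule.

        Returns a function that takes the time (in seconds) of a response and
        reports True only for the first response made after the required
        interval has elapsed since the last reinforcer.
        """
        state = {"last_reinforced_at": 0.0, "required": interval_seconds}

        def respond(now):
            if now - state["last_reinforced_at"] >= state["required"]:
                state["last_reinforced_at"] = now
                if variable:
                    # Variable Interval: the next required wait varies around the mean.
                    state["required"] = random.uniform(0.5, 1.5) * interval_seconds
                return True   # reinforced
            return False      # too early - not reinforced

        return respond

    # Fixed Interval of 300 s (5 minutes): early presses go unreinforced.
    fi = interval_reinforcer(300)
    print([fi(t) for t in (100, 200, 300, 310, 700)])  # [False, False, True, False, True]

Passing variable=True turns the same rule into a Variable Interval schedule, which is why responding under it stays steady rather than accelerating towards a known deadline.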

Ratio Schedule

Ratio schedules require a certain number of operant responses (e.g., 10 responses) to produce the next reinforcer. The required number of responses may be fixed from one reinforcer to the next (Fixed Ratio schedule) or may vary from one reinforcer to the next (Variable Ratio schedule).

Fixed Ratio schedules support a high rate of response until a reinforcer is received, after which a discernible pause in responding may be seen, especially with large ratios. Salespeople who are paid on a "commission" basis may work feverishly to reach their sales quota, after which they take a break from sales for a few days.

Variable Ratio schedules support a high and steady rate of response. The power of this schedule of reinforcement is illustrated by the gambler who persistently inserts coins and pulls the handle of a "one-armed bandit."
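The counting rule behind ratio schedules can be sketched in the same hedged way; the helper name and the ratio of 10 below are illustrative assumptions only.

    import random

    def ratio_reinforcer(ratio, variable=False):
        """Decide whether each response earns a reinforcer under a ratio schedule.

        Every call counts as one response; a reinforcer is delivered once the
        required number of responses has been made since the last reinforcer.
        """
        state = {"responses": 0, "required": ratio}

        def respond():
            state["responses"] += 1
            if state["responses"] >= state["required"]:
                state["responses"] = 0
                if variable:
                    # Variable Ratio: the next requirement varies around the mean.
                    state["required"] = random.randint(1, 2 * ratio - 1)
                return True   # reinforced
            return False      # keep responding

        return respond

    # Fixed Ratio 10: exactly every tenth response is reinforced.
    fr = ratio_reinforcer(10)
    print(sum(fr() for _ in range(100)))  # 10 reinforcers over 100 responses

With variable=True the required count becomes unpredictable, which is the property exploited by the "one-armed bandit" example above.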

Extinction

A special and important schedule of reinforcement is extinction, in which the reinforcement of a response is discontinued. Discontinuation of reinforcement leads to the progressive decline in the occurrence of a previously reinforced response.



Operant conditioning also recognizes the importance of how reinforcing stimuli are scheduled. Skinner discovered that providing a reinforcer for every response was not the best way to modify behaviours. In his experiments with rats, he began to reward them intermittently, for example on every fourth press of the lever, and found that this form of partial reinforcement was more effective in maintaining the desired behaviour.

This form of partial reinforcement is used in human psychology too. Think about pay as a reward for work - it is not continuous, but may be given at fixed times such as monthly (fixed interval reinforcement) or for a certain amount of work as in piece work (fixed ratio reinforcement). This gives rise to a number of schedules of reinforcement:

Continuous reinforcement The reinforcing stimulus is provided for every response - i.e. a reward for each press of the lever.
Fixed ratio reinforcement The stimulus is provided after a fixed number of presses of the lever, for example every fourth push. This can be seen in, for example, piece work or where bonuses are given for a certain number of sales.
Fixed interval reinforcement The stimulus is provided after a fixed amount of time - e.g. a reward becomes available every five minutes, regardless of how many times the lever has been pressed in between.
Variable ratio reinforcement The stimulus is not provided on every press of the lever, and the number of presses required for the reward varies - it may come after two presses, then four, then three, etc.
Variable interval reinforcement As with fixed interval reinforcement, the number of presses of the lever is irrelevant; however, the timing of the reward now also varies - it may come after five minutes, then four minutes, then six, etc.

The most effective forms of reinforcement have been shown to be those with a variable element - that is to say, an element of surprise and unpredictability. Variable schedules of reinforcement have been used to explain how gambling can become such a powerfully addictive behaviour. For example, if you knew that a fruit machine would pay out on every tenth press, there would be no incentive to make the first nine presses. But because the ratios and intervals are random - we can never be sure when the machine will pay out - there is always an incentive to put more money in. Indeed, the pay-out ratios and intervals of such machines are designed with this aspect of human behaviour in mind.
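To make the fruit-machine point concrete, here is a small, purely illustrative simulation; the 10% payout probability and the 50-press session are assumptions chosen for the example, not figures from real machines.

    import random

    random.seed(1)  # seeded only so the illustration is reproducible

    def fruit_machine_session(presses, payout_probability=0.10):
        """Simulate a variable-ratio 'fruit machine'.

        Each press pays out with the same small probability, so the number of
        presses between wins is unpredictable - the defining feature of a
        variable ratio schedule.
        """
        return [press for press in range(1, presses + 1)
                if random.random() < payout_probability]

    print("Presses that paid out:", fruit_machine_session(50))

Because the gaps between winning presses vary, the player can never rule out that the very next press will pay, whereas a machine paying on every tenth press would give no reason to expect anything from presses one to nine.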

OB Mod in the workplace

The use of behavioural science techniques in the workplace is known as organizational behaviour modification (OB Mod; Luthans and Kreitner, 1985). We can see how pay might form part of an OB Mod schedule as one particular stimulus: if you want someone to do something, reward them for it in their pay packet. We can probably think of a whole host of rewards that may be used in this way - status, conditions, etc. - and punishments that may discourage us from a particular behaviour at work.

Pay also demonstrates the importance of blending schedules of reinforcement. Imagine if the only reinforcement were a monthly salary - what incentive would a worker have to do any more than collect their monthly cheque? Other types of reinforcement can also be seen within the workplace: for example, negative reinforcement, where people learn to avoid negative consequences such as disciplinary action for non-attendance or underperformance, and fixed ratio elements such as bonuses or performance-related pay.

See Villere and Hartman (1991: 21) for examples of different types of reinforcement schedule within the workplace.

References

Luthans, F. and Kreitner, R. (1985) Organizational behaviour modification and beyond. Scott Foresman, Glenview, IL.

Villere, M.F. and Hartman, S.S. (1991) 'Reinforcement theory: A practical tool', Leadership & Organization Development Journal, 12(2), pp. 27-31.

What is fixed interval reinforcement an example of?

A weekly paycheck is a good example of a fixed-interval schedule. The employee receives reinforcement every seven days, which may result in a higher response rate as payday approaches.

What is a fixed interval schedule and examples?

A fixed interval is a set amount of time between occurrences of something like a reward, result, or review. Some examples of a fixed interval schedule are a monthly review at work, a teacher giving a reward for good behavior each class, and a weekly paycheck.

What's a fixed interval?

Fixed interval reinforcement is a partial reinforcement schedule in which a response is rewarded only after a fixed interval of time has elapsed. It's important to note that only the first response made after the interval has passed is rewarded, not every response during the interval.