Tuesday 21 June 2011

MIS


CONTENTS

1.      What is a computer
2.      Why use computers
3.      Computer application areas
4.      Computer historical perspective
5.      Computer generations
6.      Classification of computers
7.      Data representation in computers
7.1.   Character coding systems
7.2.   Number systems
8.      Functional/logical parts of a digital computer
8.1.   Hardware
8.1.1.      Input devices
8.1.2.      Processing devices
8.1.3.      Output devices
8.1.4.      Auxiliary/secondary storage devices
8.1.5.      Communication devices
8.1.6.      Computer memory
8.2.   Software
8.2.1.      Systems software
8.2.2.      Application software
8.2.3.      Sources of software
8.2.4.      Programming languages
8.2.4.1.          Definition
8.2.4.2.          Generations of programming languages
8.2.4.3.          Language translators
8.2.5.      Software trends and issues
9.      Data
9.1.   Information
9.1.1.      Desirable qualities of information
9.2.   Data processing
9.3.   Computer files
10.  Terminology




INSTRUCTIONS
  1. Additional Reading: Chapter 5 of the study text
  2. Complete the Reinforcing Questions at the end of the lesson.
  3. Compare your answers to the models given in the revision section of the study pack.

1. What is a computer?
A computer is an information-processing machine. It may also be defined as a device that works under the control of stored programs, automatically accepting, storing and processing data to produce information that is the result of that processing.

The forms of information processed include:

§  Data – e.g. invoices, sales ledger and purchase ledger, payroll, stock controls etc.
§  Text – widely available in many offices with microcomputers
§  Graphics – e.g. business graphs, symbols
§  Images – e.g. pictures
§  Voice – e.g. telephone
Processing includes creating, manipulating, storing, accessing and transmitting.


2. Why use computers?
Use of computers has become a necessity in many fields. Computers have revolutionized the way businesses are conducted. This is due to the advantages that computer systems offer over manual systems.  

The advantages include:

  • Speed – Computers have higher processing speeds than other means of processing, measured as the number of instructions executed per second.
  • Accuracy – Computers are not prone to errors. So long as the programs are correct, they will always give correct output. A computer is designed so that many of the inaccuracies that could arise from the malfunctioning of equipment are detected and their consequences avoided in a way that is completely transparent to the user.
  • Consistency – Given the same data and the same instructions computers will produce exactly the same answer every time that particular process is repeated.
  • Reliability – Computer systems are built with fault tolerance features, meaning that failure of one of the components does not necessarily lead to failure of the whole system.
  • Memory capability – A computer has the ability to store and access large volumes of data.
  • Processing capability – A computer has the ability to execute millions of instructions per second.


3. Computer application areas
Some of the areas in which computers are used include:

  • Communication – digital communication using computers is popular and is being adopted worldwide as opposed to analogue communication using the telephony system. Computers have also enhanced communication through email communication, electronic data interchange, electronic funds transfer, Internet etc. More specific examples include:

  • Banking – the banking sector has incorporated computer systems in such areas as credit analysis, fund transfers, customer relations, automated teller machines, home banking, and online banking.

  • Organizational management – the proliferation of management information systems has greatly aided the processes of managerial planning, controlling and directing, as well as decision-making. Computers are used in organizations for transaction processing, managerial control and decision support. Other specific areas where computer systems have been incorporated include sales and marketing, accounting, customer service etc.

  • Science, research and engineering – computers are used:
    • as research tools, complex computations
    • for simulation e.g. outer-space simulations, flight simulations
    • as diagnostic and monitoring tools,
    • computerized maps using global positioning satellite (GPS) technology
    • for modern mass production methods in the auto industry using computer driven technology

  • Education – computers incorporate databases of information that are useful in organizing and disseminating educational resources. E-learning and virtual or distributed classrooms have given the teaching industry a global reach to students. Computers are also used for scoring standardized tests done in schools, for school administration and for computer-aided instruction.

  • Management of information materials- The Internet has massive reference material on virtually every learning area. Computer systems have enabled the efficient running of libraries for information storage and retrieval.

  • Manufacturing and production – computer aided design (CAD), computer integrated manufacturing (CIM), and process control systems among other technologies are computer systems that have revolutionized the production industry.

  • Entertainment – use of computers in the entertainment industry has increased tremendously over the years. Computers enable high-quality storage of motion pictures and music files using high-speed and efficient digital storage devices such as CDs, VCDs and DVDs. The Internet is also a great source of entertainment resources. Computer games have also become a major source of entertainment.

  • Retailing – computers are used in point of sale systems and credit card payment systems as well as stock inventories.

  • Home appliances – computers (especially embedded computers or microprocessors) are included in household items for reasons of economy and efficiency of such items. Major appliances such as microwave ovens, clothes washers, refrigerators and sewing machines are making regular use of microprocessors.

  • Reservation systems – guest booking, accommodation and bills accounting using computers in hotels have made the process more efficient and faster. Airline computer reservation systems have also enhanced and streamlined air travel across major airlines. Major players in the industry have also adopted online reservation systems.

  • Health care and medicine – computers have played such an important role in the growth and improvement of health care that the use of computers in medicine has become a medical specialty in itself. Computers are used in such areas as maintenance of patient records, medical insurance systems, medical diagnosis, and patient monitoring.

4. History of Computers

The first electronic computers were produced in the 1940s. Since then, a series of breakthroughs in electronics have occurred leading to great improvements in the capacity, processing speed and quality of computer resources.

The evolution of computerization in business may be summarised as:

·         1870s: Development of the typewriter allows speedier communication and less copying.

·         1920s: Widespread adoption of the telephone enables both Wide Area Network (WAN) and Local Area Network (LAN) communication in real time. This marks the beginning of telecommunication.

·         1930s: Use of scientific management becomes available to analyse and rationalise operations.

·         1940s: Mathematical techniques developed in World War II (operations research) are applied to the decision making process.

·         1950s: Introduction of copying facilitates cheaper and faster document production, and the (limited) introduction of Electronic Data Processing (EDP) speeds up large-scale transaction processing.

·         1960s: Emergence of Management Information Systems (MIS) provides background within which office automation can develop.

·         1970s: Setting up of telecommunication networks to allow for distant communication between computer systems. There is widespread use of word processors in text editing and formatting, advancement in personal computing- emergence of PCs. Use of spreadsheets.

·         1980s: Development of office automation technologies that combine data, text, graphics and voice. Development of DSS, EIS and widespread use of personal productivity software.

·         1990s: Advanced groupware; integrated packages, combining most of the office work- clerical, operational as well as management.

·         2000s: Widespread use of the Internet and related technology in many spheres of organisations, including electronic commerce (e-commerce), e-learning and e-health.

Landmark Inventions
  • ~500 B.C. - counting table with beads
  • ~1150 in China - ABACUS - beads on wires
  • 1642 Adding machine - Pascal
  • 1822 Difference Engine/Analytical Engine - designs by Babbage
  • 1890 Hollerith punched card machine - for U.S. census
  • 1944 Mark I (Harvard) - first large-scale electromechanical computer
  • 1946 ENIAC (Penn) - first general-purpose electronic computer
  • 1951 UNIVAC - first commercial computer; 1954 first installation
  • 1964 IBM - first all-purpose computer (business + scientific)
  • 1973 HP-65, hand-held, programmable ‘calculator’
  • ~1975 Altair, Intel - first Micro-computer; CPU on a “chip”


5. Computer Generations
The classification of computers into generations is based on the fundamental technology employed. Each new generation is characterized by greater speed, larger memory capacity and smaller overall size than the previous one.

  1. First Generation Computers (1946 – 1957)
    • Used vacuum tubes to construct computers.
    • These computers were large in size and writing programs on them was difficult.
    • The following are the major drawbacks of first generation computers:
o   The operating speed was quite slow.
o   Power consumption was very high.
o   It required large space for installation.
o   The programming capability was quite low.
o   Cumbersome to operate – switching between programs, input and output

  2. Second Generation Computers (1958 - 1964)
    • Replaced vacuum tubes with transistors.
    • The transistor is smaller, cheaper and dissipates less heat than a vacuum tube.
    • The second generation also saw the introduction of more complex arithmetic and logic units, the use of high-level programming languages and the provision of system software with the computer.
    • Transistors are smaller than vacuum tubes and have a higher operating speed. They have no filament and require no heating. Manufacturing cost was also lower. Thus the size of the computer was reduced considerably.
    • It is in the second generation that the concepts of the Central Processing Unit (CPU), memory, programming languages and input and output units were developed. Programming languages such as COBOL and FORTRAN were developed during this period.

  3. Third Generation Computers (1965 - 1971)
    • Used integrated circuits.
    • Although the transistor technology was a major improvement over vacuum tubes, problems remained. The transistors were individually mounted in separate packages and interconnected on printed circuit boards by separate wires. This was a complex, time consuming and error-prone process.
    • The early integrated circuits are referred to as small-scale integration (SSI). Computers of this generation were smaller in size and lower in cost, and had larger memory and much higher processing speed.

  4. Fourth Generation Computers (1972 - Present)
    • Employ Large Scale Integrated (LSI) and Very Large Scale Integrated (VLSI) circuit technology to construct computers. Over 1,000 components can be placed on a single integrated-circuit chip.

  5. Fifth Generation Computers
    • These are the computers of the 1990s onwards.
    • Use Very Large Scale Integrated (VLSI) circuit technology to build computers. Over 10,000 components can be incorporated on a single integrated chip.
    • The speed is extremely high in fifth generation computers. Apart from this, they can perform parallel processing. The concept of artificial intelligence has been introduced to allow the computer to make its own decisions.

6. Classification of computers
Computers can be classified in different ways as shown below:

Classification by processing
This is by how the computer represents and processes the data.

a)      Digital computers process data that is represented in the form of discrete values (e.g. 0, 1, 2) by operating on it in steps. They are used for both business data processing and scientific purposes since digital computation results in greater accuracy.

b)      Analogue computers are used for scientific, engineering, and process-controlled purposes. Outputs are represented in the form of graphs. Analogue computers process data represented by physical variables and output physical magnitudes in the form of smooth graphs.

c)      Hybrid computers are computers that have the combined features of digital and analogue computers. They offer an efficient and economical method of working out special problems in science and various areas of engineering.

Classification by purpose
This is a classification by the use to which the computer is put.

a)      Special purpose computers are used for a certain specific function e.g. in medicine, engineering, manufacturing.

b)      General-purpose computers can be used for a wide variety of tasks e.g. accounting, word processing

Classification by generation
This is a time-based classification coinciding with technological advances.
The computers are categorized as First generation through to Fifth generation.

a)      First generation. Computers of the early 1940s. Used a circuitry of wires and vacuum tubes. Produced a lot of heat, took a lot of space, were very slow and expensive. Examples are LEO 1 and UNIVAC 1.

b)      Second generation. Computers of the early 1950s. Made use of transistors and thus were smaller and faster (around 200 kHz). Examples include the IBM system 1000.

c)      Third generation. Computers of the 1960s. Made use of Integrated Circuits. Speeds of up to 1MHz. Examples include the IBM system 360.

d)     Fourth generation. Computers of the 1970s and 1980s. Used Large Scale Integration (LSI) technology. Speeds of up to 10MHz. Examples include the IBM 4000 series.

e)      Fifth generation. Computers of the 1990s. Use Very Large Scale Integration (VLSI) technology and have speeds up to 400MHz and above.

Classification by power and size/ configuration

a)      Supercomputers. The largest and most powerful computers. Used to process large amounts of data very quickly. Useful for meteorological or astronomical applications. Examples include Cray and Fujitsu.

b)      Mainframe computers. Large computers in terms of price, power and size. They require a carefully controlled environment and specialist staff to operate them, and are used for centralized processing in large commercial organizations. Manufacturers include International Business Machines (IBM).

c)      Minicomputers. Their size, speed and capabilities lie somewhere between mainframes and microcomputers. Used as departmental computers in large organizations or as the main computer in medium-sized organizations. Manufacturers of minicomputers include IBM and International Computers Limited (ICL).

d)     Microcomputers. These are the personal computers commonly used for office and leisure activities. Examples include Hewlett Packard (HP), Compaq and Dell.  They include desktops, laptops and palmtops.


7. Data representation in computers
Data exists as electrical voltages in a computer.  Since electricity can exist in 2 states, on or off, binary digits are used to represent data.  Binary digits, or bits, can be “0” or “1”. The bit is the basic unit of representing data in a digital computer.

A bit is either a 1 or a 0. These correspond to two electronic/magnetic states of ON (1) and OFF (0) in digital circuits which are the basic building blocks of computers. All data operated by a computer and the instructions that manipulate that data must be represented in these units. Other units are a combination of these basic units. Such units include:

  • 1 byte (B) = 2^3 bits = 8 bits – usually used to represent one character e.g. ‘A’
  • 1 kilobyte (KB) – 2^10 bytes = 1,024 bytes (usually considered as 1,000 bytes)
  • 1 megabyte (MB) – 2^20 bytes = 1,048,576 bytes (usually considered as 1,000,000 bytes/1,000 KB)
  • 1 gigabyte (GB) – 2^30 bytes = 1,073,741,824 bytes (usually considered as 1,000,000,000 bytes/1,000 MB)
  • 1 terabyte (TB) – 2^40 bytes = 1,099,511,627,776 bytes (usually considered as one trillion bytes/1,000 GB)
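
These relationships can be verified with a few lines of Python; the short sketch below is purely illustrative and not part of the study text:

    # Storage units expressed as powers of two
    bits_per_byte = 2 ** 3          # 1 byte = 8 bits
    KB = 2 ** 10                    # 1,024 bytes
    MB = 2 ** 20                    # 1,048,576 bytes
    GB = 2 ** 30                    # 1,073,741,824 bytes
    TB = 2 ** 40                    # 1,099,511,627,776 bytes
    print(bits_per_byte, KB, MB, GB, TB)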

Bit patterns (the pattern of 1s or 0s found in the bytes) represent various kinds of data:
  • Numerical values (using the binary number system)
  • Text/character data (using the ASCII coding scheme)
  • Program instructions (using the machine language)
  • Pictures (using such data formats as gif, jpeg, bmp and wmf)
  • Video (using such data formats as avi, mov and mpeg)
  • Sound/music (using such data formats as wav, au and mp3)

Computer data is represented using number systems and one of the character coding schemes described below.

Character Coding Schemes
(i)                 ASCII – American Standard Code for Information Interchange
ASCII (American Standard Code for Information Interchange) is the most common format for text files in computers and on the Internet. In an ASCII file, each alphabetic, numeric, or special character is represented with a 7-bit binary number (a string of seven 0s or 1s). 128 possible characters are defined.

Unix and DOS-based operating systems use ASCII for text files. Windows NT and 2000 use a newer code, Unicode. IBM's S/390 systems use a proprietary 8-bit code called EBCDIC. Conversion programs allow different operating systems to change a file from one code to another. ASCII was developed by the American National Standards Institute (ANSI).

(ii)              EBCDIC
EBCDIC is a binary code for alphabetic and numeric characters that IBM developed for its larger operating systems. It is the code for text files that is used in IBM's OS/390 operating system for its S/390 servers and that thousands of corporations use for their legacy applications and databases. In an EBCDIC file, each alphabetic or numeric character is represented with an 8-bit binary number (a string of eight 0's or 1's). 256 possible characters (letters of the alphabet, numerals, and special characters) are defined.

(iii)            Unicode
Unicode is an entirely new idea in setting up binary codes for text or script characters. Officially called the Unicode Worldwide Character Standard, it is a system for "the interchange, processing, and display of the written texts of the diverse languages of the modern world." It also supports many classical and historical texts in a number of languages.
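
As a small illustration (assuming a standard Python 3 interpreter, with UTF-8 used as an example of a Unicode encoding), the character codes discussed above can be inspected as follows:

    # ASCII assigns each character a 7-bit code (values 0 to 127)
    print(ord('A'))                    # 65
    print(format(ord('A'), '07b'))     # 1000001 - the 7-bit pattern for 'A'

    # Unicode covers the scripts of many languages; UTF-8 is one of its encodings
    print('A'.encode('ascii'))         # b'A' - one byte
    print('é'.encode('utf-8'))         # b'\xc3\xa9' - two bytes for a non-ASCII character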


Number Systems
(i)                 Decimal system (base 10)
This is the normal human numbering system where all numbers are represented using base 10. The decimal system consists of 10 digits, namely 0 to 9. This system is not used by the computer for internal data representation. The position of a digit represents its relation to a power of ten.
E.g. 45780 = (0×10^0) + (8×10^1) + (7×10^2) + (5×10^3) + (4×10^4)


(ii)              Binary system (base 2)
This is the system that is used by the computer for internal data representation whereby numbers are represented using base 2. Its basic units are 0 and 1, which are referred to as BITs (BInary digiTS). 0 and 1 represent two electronic or magnetic states of the computer that are implemented in hardware. The implementation is through use of electronic switching devices called gates, which like a normal switch are in either one of two states: ON (1) or OFF (0).

The information supplied by a computer as a result of processing must be decoded in the form understandable to the user.

E.g. Number 15 in decimal is represented as 1111 in binary system:
1111 = (1×2^0) + (1×2^1) + (1×2^2) + (1×2^3)
      = 1 + 2 + 4 + 8 = 15
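
The same conversion can be checked programmatically; this Python sketch (illustrative only) mirrors the worked example above:

    # Decimal <-> binary using Python's built-in conversions
    print(bin(15))           # '0b1111' - 15 decimal is 1111 in binary
    print(int('1111', 2))    # 15       - 1111 read as a base-2 number

    # The positional expansion used in the text: 1x2^0 + 1x2^1 + 1x2^2 + 1x2^3
    print(sum(int(d) * 2**i for i, d in enumerate(reversed('1111'))))   # 15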

(iii)            Octal system (base 8)
Since binary numbers are long and cumbersome, a more convenient representation combines groups of three bits into octal (base 8) digits. In the octal number system there are only eight possible digits, that is, 0 to 7. This system is popular with some microprocessors because numbers represented in the octal system can be used directly for input and output operations. Complex binary numbers with several 1s and 0s can be conveniently handled in base eight. The binary digits are grouped into threes and each group is used to represent an individual octal digit.

For example: the binary number 10001110011 can be handled as the octal number 2163.

That is       010      001      110      011
                2          1          6          3

(iv)             Hexadecimal (base 16)
The hexadecimal number system is similar to the octal system except that the base is 16, so there are 16 digits. The sixteen symbols used in this system are the decimal digits 0 to 9 and the letters A to F. Hexadecimal numbers are used because more complex binary notations can be simplified by grouping the binary digits into groups of four, each group representing a hexadecimal digit. For example, the binary number 0001 0010 1010 0000 can be handled in base 16 as 12A0.

That is       0001    0010    1010    0000
                 1           2          A          0
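
The grouping of bits into threes (octal) and fours (hexadecimal) can also be confirmed in Python; this sketch reuses the two binary numbers from the examples above:

    # Octal: bits grouped in threes from the right
    print(oct(0b10001110011))               # '0o2163'

    # Hexadecimal: bits grouped in fours from the right
    print(format(0b0001001010100000, 'X'))  # '12A0'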



8. Functional/Logical parts of a digital computer
The system unit houses the processing components of the computer system.  All other computer system devices are called peripherals, and are connected directly or indirectly into the system unit. 
·         Input devices – Enters program and data into computer system.
·         Central Processing Unit (CPU) – This is the part of the computer that processes data. It consists of the control unit, the arithmetic and logic unit and registers.
·         Main Memory – Temporary storage to hold programs and data during execution/ processing.
·         Control Unit – Controls execution of programs.
·         Arithmetic Logic Unit (ALU) – Performs actual processing of data using program instructions.
  • Output devices – Displays information processed by the computer system.
  • Storage devices – Permanent storage of data and programs before and after they are processed by the computer system.
  • Communication devices – Enable communication with other computers.




8.1 Hardware
Refers to the physical, tangible computer equipment and devices, which provide support for major functions such as input, processing (internal storage, computation and control), output, secondary storage (for data and programs), and communication.

Hardware categories
A computer system is a set of integrated devices that input, output, process, and store data and information. Computer systems are currently built around at least one digital processing device. There are five main hardware components in a computer system:  the central processing unit (CPU); primary storage (main memory); secondary storage; and input and output devices.

Basic elements of hardware
The basic elements that make up a computer system are as follows:

a)      Input
Most computers cannot accept data in forms customary to human communication such as speech or hand-written documents. It is necessary, therefore, to present data to the computer in a way that provides easy conversion into its own electronic pulse-based forms. This is commonly achieved by typing data using the keyboard or using an electronic mouse or any other input device.

·         A keyboard can be connected to a computer system through a terminal. A terminal is a form of input and output device. A terminal can be connected to a mainframe or another type of computer, called a host computer or server. There are four types of terminals, namely dumb, intelligent, network and Internet terminals.

·         Dumb Terminal
-          Used to input and receive data only.
-          It cannot process data independently.
-          A terminal used by an airline reservation clerk to access a mainframe computer for flight information is an example of a dumb terminal.
·         Intelligent Terminal
-          Includes a processing unit, memory, and secondary storage. 
-          It uses communications software and a telephone hookup or other communications link.
-          A microcomputer connected to a larger computer by a modem or network link is an example of an intelligent terminal.
·         Network Terminal
-          Also known as a thin client or network computer. 
-          It is a low cost alternative to an intelligent terminal. 
-          Most network terminals do not have a hard drive. 
-          This type of terminal relies on a host computer or server for application or system software.
·         Internet Terminal
-          Is also known as a web terminal. 
-          It provides access to the Internet and displays web pages on a standard television set. 
-          It is used almost exclusively in the home.

  • Direct data entry devices – Direct entry creates machine-readable data that can go directly to the CPU. It reduces human error that may occur during keyboard entry. Direct entry devices include pointing, scanning and voice-input devices.

                   i.            Pen input devices e.g. light pen
Pen input devices are used to select or input items by touching the screen with the pen.  Light pens accomplish this by using a light-sensitive cell at the tip of the pen.  When the light pen is placed against the monitor, it closes a photoelectric circuit. The photoelectric circuit identifies the spot for entering or modifying data. Engineers who design microprocessor chips or airplane parts use light pens.

Touch sensitive screen inputs
Touch sensitive screens, or touch screens, allow the user to execute programs or select menu items by touching a portion of a special screen.  Behind the plastic layer of the touch screen are crisscrossed invisible beams of infrared light. Touching the screen with a finger can activate actions or commands. Touch screens are often used in ATMs, information centres, restaurants and stores. They are popularly used at gas stations for customers to select the grade of gas or request a receipt at the pump (in developed countries), as well as in fast-food restaurants to allow clerks to easily enter orders.

                 ii.            Scanning Devices
Scanning devices, or scanners, can be used to input images and character data directly into a computer.  The scanner digitises the data into machine-readable form.

The scanning devices used in direct-entry include the following:
·         Image Scanner    – converts images on a page to electronic signals.
·         Fax Machine – converts light and dark areas of an image into format that can be sent over telephone lines.
·         Bar-Code Readers – photoelectric scanner that reads vertical striped marks printed on items.
·         Character and Mark Recognition Devices – scanning devices used to read marks on documents.

  Character and Mark Recognition Device Features
·         Can be used by mainframe computers or powerful microcomputers.
·         There are three kinds of character and mark recognition devices:
-          Magnetic-ink character recognition (MICR)
Magnetic ink character recognition, or MICR, readers are used to read the numbers printed at the bottom of checks in special magnetic ink.  These numbers are an example of data that is both machine readable and human readable.  The use of MICR readers increases the speed and accuracy of processing checks.

            -     Optical-character recognition (OCR)
Read special preprinted characters, such as those on utility and telephone bills.

-          Optical-mark recognition (OMR)
Reads marks on tests – also called mark sensing. Optical mark recognition readers are often used for test scoring since they can read the location of marks on what is sometimes called a mark sense document.  This is how, for instance, standardized tests, such as the KCPE, SAT or GMAT are scored.

                iii.            Voice-input devices
Voice-Input Devices can also be used for direct input into a computer. Speech recognition can be used for data input when it is necessary to keep your hands free. For example, a doctor may use voice recognition software to dictate medical notes while examining a patient.  Voice recognition can also be used for security purposes to allow only authorized people into certain areas or to use certain devices. 

·         Voice-input devices convert speech into a digital code.
·         The most widely used voice-input device is the microphone.
·         A microphone, sound card, and software form a voice recognition system.

Note:
Point-of-sale (POS) terminals (electronic cash registers) use both keyboard and direct entry.
·         Keyboard Entry can be used to type in information.
·         Direct Entry can be used to read special characters on price tags.

Point-of-sale terminals can use wand readers or platform scanners as direct entry devices.
·         Wand readers or scanners reflect light on the characters.
·         Reflection is changed by photoelectric cells to machine-readable code.
·         Encoded information from the product’s barcode, e.g. price, appears on the terminal’s digital display.


b)      Storage
Data and instructions enter main storage, and are held until needed to be worked on. The instructions dictate action to be taken on the data. Results of the action will be held until they are required for output.

c)      Control
Each computer has a control unit that fetches instructions from main storage, interprets them, and issues the necessary signals to the components making up the system. It directs all hardware operations necessary in obeying instructions.





d)     Processing
Instructions are obeyed and the necessary arithmetic and logic operations are carried out on the data. The part that does this is called the Arithmetic and Logic Unit (ALU).


Processing devices
(i) The CPU (Central Processing Unit)
The CPU (Central Processing Unit) controls the processing of instructions. The CPU produces electronic pulses at a predetermined and constant rate. This is called the clock speed. Clock speed is generally measured in megahertz, that is, millions of cycles per second, or in gigahertz (thousands of millions of cycles per second).

It consists of:
    • Control Unit (CU) – The electronic circuitry of the control unit accesses program instructions, decodes them and coordinates instruction execution in the CPU.
    • Arithmetic and Logic Unit (ALU) – Performs mathematical calculations and logical comparisons.
    • Registers – These are high-speed storage circuitry that holds the instruction and the data while the processor is executing the instruction.
    • Bus – This is a highway connecting internal components to each other.
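
A purely illustrative sketch of how these parts cooperate is given below as a toy fetch-decode-execute loop in Python; the instruction names and the single accumulator register are invented for the example and do not describe any real processor:

    # Toy fetch-decode-execute cycle
    program = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("HALT", None)]   # held in "memory"
    accumulator = 0                                                   # a single register

    for opcode, operand in program:      # the control unit fetches and decodes
        if opcode == "LOAD":
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand       # the ALU performs the arithmetic
        elif opcode == "HALT":
            break

    print(accumulator)                   # 10
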
(ii) Main Memory
Primary storage, also called main memory, although not a part of the CPU, is closely related to the CPU.  Main memory holds program instructions and data before and after execution by the CPU.  All instructions and data pass through main memory locations.  Memory is located physically close to the CPU to decrease access time, that is, the time it takes the CPU to retrieve data from memory.  Although the overall trend has been towards shorter access times, memory has not advanced as quickly as processors. Memory access time is often measured in nanoseconds, or billionths of a second.

e)      Output
Results are taken from main storage and fed to an output device. This may be a printer, in which case the information is automatically converted to a printed form called hard copy or to a monitor screen for a soft copy of data or information.

Output devices
Output is human-readable information. Input (data) is processed inside the computer’s CPU into meaningful output (information). Output devices translate the machine-readable information into human-readable information.
  • Punched cards: characters are coded onto an 80-column card in columns by combining punches in different locations; a special card reader reads the cards and translates them into transactions for the computer. These are now used only for older applications.
  • Paper tape punch
Printers
 – Produce printed output on paper, often referred to as hard-copy output.
Categorized according to:
(i)         Printing capacity
o         Character printers – Print one character at a time.
o         Line printers – Print one line at a time.
o         Page printers – Print a whole page at a time.
(ii)        Mode of printing
o         Dot matrix printers
Forms images via pins striking a ribbon against the paper. The print head typically has 9 or 24 pins. The images are of relatively poor quality since the dots are visible upon close inspection. Though inexpensive compared to other types, they are noisy, and low-end models are slow (speed varies with price).
o         Ink jet printers
Forms images by “shooting” tiny droplets of ink onto paper. They offer relatively good image quality, with dots so small that they are not noticeable even upon close inspection. They are relatively quiet compared to dot matrix printers and most can print colour images.
o         Laser printers
Forms images using copier technology – a laser/LED (Light Emitting Diode) lights up dots to be blackened and toner sticks to these dot positions on the paper. They have excellent image quality – so many small dots that they are not noticeable, even upon close inspection. They are quieter than ink jet printers.
o   Thermal Printers
Forms images using heat elements and heat-sensitive paper. They are very quiet and not widely used by home PC users. Some very expensive colour models are available. “Ink” in these printers comes in the form of wax crayons.
Plotters 
Plotters are typically used for design output.  They are special-purpose output devices used to produce charts, maps, architectural drawings and three-dimensional representations. They can produce high-quality multi-colour documents or larger size documents. Plotters produce documents such as blueprints or schematics.
     


Monitors
– Output device for soft-copy output (temporary screen display of output, which lasts as long as the monitor’s power is on). They are the most frequently used output devices. Some are used on the desktop; others are portable. Two important characteristics of the monitor are size and clarity.

Voice-output devices
·         Voice-output devices make sounds that resemble human speech.
·         Voice-output devices use prerecorded vocalized sounds to produce output.
·         The computer “speaks” synthesized words.
·         Voice output is not as difficult to create as voice input.
·         Most widely used voice-output devices are stereo speakers and headphones.
·         Devices are connected to a sound card in the system unit.
·         Sound card is used to capture sound as well as play it back.

      Examples of voice output uses:
·         Soft-drink machines, the telephone, and in cars.
·         Voice output can be used as a tool for learning.
·         Can help students study a foreign language.
·         Used in supermarkets at the checkout counter to confirm purchases.
·         Most powerful capability is to assist the physically challenged.

Auxiliary/Secondary Storage devices
Secondary storage devices store a larger amount of data or instructions than does main memory, on a more permanent basis.  On a per megabyte basis, secondary storage is also cheaper than primary storage.  Secondary storage is also infinitely extendable, unlike main memory, which is finite.  Secondary storage is not volatile.  Secondary storage is also more portable than primary storage – that is, it is possible to remove it from a computer and use the device and its contents in another.

Types of secondary storage devices
·         Magnetic disks – Stores bits as magnetic spots. Magnetic disks are similar to magnetic tapes in that areas are magnetized to represent bits.  However the disks’ read/write head can go directly to the desired record, allowing fast data retrieval. Magnetic disks can range from small and portable, such as diskettes with 1.44MB of storage capacity, to large capacity fixed hard disks, which are more expensive and less portable.
o         Floppy disks (diskettes)
§  5 ¼ floppy disks
§  3 ½ floppy disks – The most common size, with a capacity of 1.44 MB. They are not very fast or durable.    
o         Hard disks/Fixed disks – Also called hard drives. Their capacities range from 20 to 120 GB. They are fast and durable though not foolproof. Most are internal, but disks that use removable cartridges are available. Disk compression can be used to increase capacity but slows performance.
·         Optical Disks – Store bits as “pits” and “lands” on surface of disk that can be detected (read) by a laser beam.
      • CD-ROM (Compact Disk Read Only Memory) – Can only be read and cannot be erased for rewriting. Has a capacity of 650 MB.
      • CD-R (Compact Disk Recordable) / WORM (Write Once, Read Many) – Usually blank at first and can be written to only once. Has a capacity of 650 MB.
      • CD-RW (Compact Disk ReWritable) – Can be written and read more than once. Has a capacity of 650 MB.
      • DVD-ROM (Digital Video Disk) – Similar to a CD except that it holds high-quality sound and high-resolution video. Has a normal capacity of 4.7 GB and up to 17 GB if double-sided with double layering. Uses laser technology. DVDs are a relatively new technology used mainly in the entertainment industry.
·         Magnetic Tapes – Magnetic tape is similar in composition to the kind of tape found in videotapes and audiotapes.  A plastic film is coated with iron oxide, which is magnetized to represent bits.
o   Tape cartridges – Used in personal computers. They hold up to 20 GB per tape, or more.
o   Tape reels – Used in minicomputers and mainframes.
·         Other Backup Options
o   Zip drive/disk – Uses special diskettes that hold 100 MB, 250 MB or 750 MB
o   SyQuest drive – Uses special cartridges that hold 200 MB
·         RAID - RAID stands for redundant arrays of independent or inexpensive disks. RAID technology is fault tolerant; that is, it allows data to be stored so that no data or transactions are lost in the event of disk failure. RAID involves using multiple hard disks in a special controller unit and storing data across all the disks in conjunction with extra reconstruction information that allows data to be recovered if a hard disk fails.

·         Storage Area Network (SAN) – A storage area network connects servers and storage devices in a network to store large volumes of data. Data stored in a storage area network can be quickly retrieved and backed up.  The use of storage area networks is likely to increase in the near future.

·         Computer Output Microfilm (COM) -Companies that must store significant numbers of paper documents often use computer output microfilm.  These devices transfer data directly from the computer onto the microfilm, thus eliminating the intermediate step of printing the document on paper.  Newspapers and journals typically archive old issues in this manner, although some are now using optical storage devices.

Storage capacity abbreviations
  • KB - kilobyte - 1000 (thousand)
  • MB - megabyte - 1,000,000 (million)
  • GB - gigabyte - 1,000,000,000 (billion)
  • TB - terabyte - 1,000,000,000,000 (trillion)
Communication devices
  • Modem - Modems allow computers (digital devices) to communicate via the phone system (based on analog technology). The modem turns the computer’s digital data into analog form, sends it over the phone line, and then another modem at the other end of the line turns the analog signal back into digital data.
  • Fax/modem - a basic digital/analog modem enhanced with fax transmission hardware that enables faxing of information from the computer to another fax/modem or to a fax machine (NOTE: a separate scanner must be connected to the computer in order to use the fax/modem to transfer external documents)
Computer Memory
Memory capability is one of the features that distinguish a computer from other electronic devices. Like the CPU, memory is made of silicon chips containing circuits holding data represented by on or off electrical states, or bits.  Eight bits together form a byte.  Memory is usually measured in megabytes or gigabytes.

A kilobyte is roughly 1,000 bytes.  Specialized memories, such as cache memories, are typically measured in kilobytes.  Today both primary memory and secondary storage capacities run to megabytes (millions of bytes) of space and beyond.

Types of Memory
Memory is classified as either volatile (contents are lost when power is switched off, e.g. RAM) or non-volatile (contents are retained, e.g. ROM).

  1. RAM (Random Access Memory) /RWM (Read Write Memory) – Also referred to as main memory, primary storage or internal memory. Its contents can be read and changed, and it is the working area for the user. It is used to hold programs and data during processing. RAM chips are volatile, that is, they lose their contents if power is disrupted. 

Typical sizes of RAM include 32MB, 64MB, 128MB, 256MB and 512MB.
    1. EDO – Extended Data Out
    2. DRAM – Dynamic RAM
    3. SDRAM – Synchronous DRAM

  2. ROM (Read Only Memory) – Its contents can only be read and cannot be changed. ROM chips are non-volatile, so the contents aren’t lost if the power is disrupted.  ROM provides permanent storage for unchanging data & instructions, such as data from the computer maker. It is used to hold the instructions for starting the computer, called the bootstrap program.
ROM: in these chips the contents, or combination of electrical circuit states, are set by the manufacturer and cannot be changed. The states are permanently manufactured into the chip.

PROM: the settings must be programmed into the chip.  After they are programmed, PROM behaves like ROM – the circuit states can’t be changed.  PROM is used when instructions will be permanent, but they aren’t produced in large enough quantities to make custom chip production (as in ROM) cost effective.  PROM chips are, for example, used to store video game instructions.

Instructions are also programmed into erasable programmable read-only memory.  However, the contents of the chip can be erased and the chip can be reprogrammed.  EPROM chips are used where data and instructions don’t change often, but non-volatility and quickness are needed.  The controller for a robot arm on an assembly line is an example of EPROM use.

    1. PROM (Programmable Read Only Memory) – It is written onto only once using special devices. Used mostly in electronic devices such as alarm systems.
    2. EPROM (Erasable Programmable Read Only Memory) –Can be written onto more than once.

3.  Cache Memory - Cache memory is high-speed memory that a processor can access more quickly than RAM.  Frequently used instructions are stored in cache since they can be retrieved more quickly, improving the overall performance of the computer.  Level 1 (L1) cache is located on the processor; Level 2 (L2) cache is located between the processor and RAM.




8.2  Software
Software is detailed step-by-step sequence of instructions known as program which guide computer hardware. A computer program is a sequence of instructions that tell the computer hardware what to do.  Programs are written in programming languages, which consists of a set of symbols combined according to a given syntax.

A program must be in main memory (RAM) to be executed. These invisible, intangible components of a computer that direct and control the operations of the hardware when processing data are referred to as software.

Software is classified into two major types: system and application software.

System software
Systems software consists of programs that coordinate the activities of hardware and other programs. System software is designed for a specific CPU and hardware class. The combination of a particular hardware configuration and operating system is called a computer platform. These programs manage the "behind the scenes" operation of the computer.
 
Examples
  • Operating systems
  • Utility Programs - Utility programs often come installed on computer systems or packaged with operating systems.  Utilities can also be purchased individually.  Utility programs perform useful tasks, such as virus detection, tracking computer jobs, and compressing data.
  • Language processors – Compilers and interpreters
Operating systems
The functions of an operating system include:
  • Perform common hardware functions
- Accept input and store data on disks and send data to output devices
  • Provide a user interface
  • Provide hardware independence
  • Manage system memory
  • Manage processing
  • Control access to system resources
- Protection against unauthorized access
- Logins and passwords
  • Manage files
- Physical storage location
      - File permissions
      - File access

Examples of operating systems include:
  • DOS – Disk operating system
  • Windows 3.1, 95, 98, NT, 2000, ME, XP
  • Linux, Unix, MAC OS, System/7
Application software
Applications software includes programs designed to help end users solve particular problems using the computer or to perform specific tasks. 

Sources of software

Software may be acquired from two main sources: proprietary software, developed in-house or by a contracted developer specifically for the organization, and off-the-shelf software, purchased as a ready-made package from a software vendor. The advantages and disadvantages of each are outlined below.

Advantages of proprietary software
  • You can get exactly what you need in terms of reports, features etc.
  • Being involved in development offers a further level in control over results.
  • There is more flexibility in making modifications that may be required to counteract a new initiative by a competitor or to meet new supplier or customer requirements. A merger with another firm or an acquisition will also necessitate software changes to meet new business needs.

Disadvantages of proprietary software
  • It can take a long time and significant resources to develop required features.
  • In-house system development staff may become hard pressed to provide the required level of ongoing support and maintenance because of pressure to get on to other new projects.
  • There is more risk concerning the features and performance of the software that has yet to be developed.

Advantages of off-the-shelf software
  • The initial cost is lower since the software firm is able to spread the development costs over a large number of customers.
  • There is a lower risk that the software will fail to meet basic business needs – you can analyse the existing features and performance of the package before buying.
  • The package is likely to be of high quality since many customer firms have tested the software and helped identify many of its bugs.

Disadvantages of off-the-shelf software
  • An organization may have to pay for features that are not required and never used.
  • The software may lack important features, thus requiring future modifications or customisation. This can be very expensive because users must adopt future releases of the software.
  • Software may not match current work processes and data standards.

Application software is further classified into general-purpose software and applications.

General-purpose software
Examples include
  • Word processing – Create, edit and print text documents. E.g. MS Word, WordPerfect.
  • Spreadsheets – Provide a wide range of built-in functions for statistical, logical, financial, database, graphics, data and time calculations. E.g. Lotus 1-2-3, Excel, Quattro Pro.
  • Database management systems (DBMS) – Store, manipulate and retrieve data. E.g. Access, FoxPro, dBase.
  • Online Information Services – Obtain a broad range of information from commercial services. E.g. America Online, CompuServe
  • Communications – e.g. MS Outlook and Eudora for e-mail
  • Browsers e.g. Internet Explorer
  • Graphics – Develop graphs, illustrations and drawings. E.g. PaintShop, FreeHand, Corel
  • Project Management – Plan, schedule, allocate and control people and resources needed to complete a project according to schedule.  E.g. Project for Windows, Time Line.
  • Financial Management – Provide income and expense tracking and reporting to monitor and plan budgets. E.g. Quicken
  • Desktop publishing -used to create high-quality printed output including text and graphics; various styles of pages can be laid out; art and text from other programs can also be integrated into published pages. E.g. PageMaker, Publisher.
  • Presentation packages like MS Powerpoint

Note: A software suite, such as Microsoft Office, offers a collection of powerful programs including word processing, spreadsheet, database, graphics and other programs. The programs in a software suite are designed to be used together. In addition, the commands, the icons and procedures are the same for all programs in the suite.

Programming Languages
Programming languages are collections of commands, statements and words that are combined using a particular syntax, or rules, to write both systems and application software.  This results in meaningful instructions to the CPU.

Generations of programming languages
Machine Language (1st Generation Languages)
A machine language consists of binary digits, that is, zeroes and ones. Instructions and addresses are written in binary (0,1) code. Binary is the only “language” a CPU can understand.  The CPU directly interprets and executes this language, making it fast in executing its instructions. Machine language programs directly instructed the computer hardware, so they were not portable.  That is, a program written for computer model A could not be run on computer model B without being rewritten. All software in other languages must ultimately be translated down to machine language form. The translation process makes the other languages slower.

Advantage
·         The only advantage is that machine language programs run very fast because no translation program is required for the CPU.

Disadvantages
·         It is very difficult to program in machine language. The programmer has to know the details of the hardware to write a program.
·         The programmer has to remember a lot of codes to write a program, which results in program errors.
·         It is difficult to debug the program.


Assembly Language (2nd Generation languages)
Uses symbols and codes instead of binary digits to represent program instructions. It is a symbolic language meaning that instructions and addresses are written using alphanumeric labels, meaningful to the programmer.

The resulting programs still directly instructed the computer hardware.  For example, an assembly language instruction might move a piece of data stored at a particular location in RAM into a particular location on the CPU.  Therefore, like their first generation counterparts, second generation programs were not easily portable. 

Assembly languages were designed to run in a small amount of RAM. Furthermore, they are low-level languages; that is, the instructions directly manipulate the hardware.  Therefore, programs written in assembly language execute efficiently and quickly.  As a result, much systems software is still written using assembly languages.

The language has a one-to-one mapping with machine instructions but has macros added to it. A macro is a group of multiple machine instructions that is treated as one instruction in assembly language. A macro performs a specific task, for example adding or subtracting. A one-to-one mapping means that every ordinary assembly instruction corresponds to a machine language instruction.

An assembler is used to translate the assembly language statements into machine language.

Advantages:
  • The symbolic programming of Assembly Language is easier to understand and saves a lot of time and effort of the programmer.
  • It is easier to correct errors and modify program instructions.
  • Assembly Language has the same efficiency of execution as the machine level language, because there is a one-to-one translation between an assembly language program and its corresponding machine language program.

Disadvantages:
  • One of the major disadvantages is that assembly language is machine dependent. A program written for one computer might not run in other computers with different hardware configuration.


High-level languages (3rd generation languages)
Third generation languages are easier to learn and use than were earlier generations. Thus programmers are more productive when using third generation languages. For most applications, this increased productivity compensates for the decrease in speed and efficiency of the resulting programs.  Furthermore, programs written in third generation languages are portable; that is, a program written to run on a particular type of computer can be run with little or no modification on another type of computer. Portability is possible because third generation languages are “high-level languages”; that is instructions do not directly manipulate the computer hardware.

Third generation languages are sometimes referred to as “procedural” languages since program instructions must still give the computer detailed instructions of how to reach the desired result.

High-level languages incorporate greater use of symbolic code. Their statements are more English-like, for example print, get, while. They are easier to learn but the resulting programs are slower in execution. Examples include BASIC, COBOL, C and FORTRAN. They must first be compiled (translated into corresponding machine language statements) through the use of compilers.
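
As an illustration of the high-level style (using Python purely as an example of such a language), the fragment below states what is wanted in near-English terms, with no reference to registers or memory addresses:

    # High-level code describes the task in problem terms, not hardware terms
    prices = [120, 250, 75]
    total = sum(prices)
    print("Total:", total)    # Total: 445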
 
Advantages of High Level Languages
  • Higher-level languages have a major advantage over machine and assembly languages: they are easy to learn and use.
  • They are portable.

Fourth Generation Languages (4GLs)
Fourth generation languages are even easier to use, and more English-like, than are third generation languages.  Fourth generation languages are sometimes referred to as “non-procedural”, since programs tell the computer what it needs to accomplish, but do not provide detailed instructions as to how it should accomplish it. Since fourth generation languages concentrate on the output, not procedural details, they are more easily used by people who are not computer specialists, that is, by end users.

Many of the first fourth generation languages were connected with particular database management systems.  These languages were called query languages since they allow people to retrieve information from databases.  Structured query language, SQL, is a current fourth generation language used to access many databases. There are also some statistical fourth generation languages, such as SAS or SPSS.
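
A hedged sketch of this non-procedural style is shown below using SQL through Python's standard sqlite3 module; the table and column names are invented for the example:

    import sqlite3

    # Build a small in-memory database purely for demonstration
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
    conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                     [("Atieno", "Sales", 45000), ("Mwangi", "IT", 60000)])

    # The SQL statement says WHAT is wanted, not HOW to search the table
    for row in conn.execute("SELECT name, salary FROM employees WHERE department = 'IT'"):
        print(row)            # ('Mwangi', 60000.0)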

Some fourth generation languages, such as Visual C++, Visual Basic, or PowerBuilder, are targeted at more knowledgeable users, since they are more complex to use. Visual programming languages, such as Visual Basic, use windows, icons, and pull-down menus to make programming easier and more intuitive.

Object Oriented Programming
First, second, third and fourth generation programming languages were used to construct programs that contained procedures to perform operations, such as draw or display, on data elements defined in a file. 

Object oriented programs consist of objects, such as a time card, that include descriptions of the data relevant to the object, as well as the operations that can be done on that data. For example, included in the time card object would be descriptions of such data as employee name, hourly rate, start time, end time, and so on.  The time card object would also contain descriptions of such operations as calculate total hours worked or calculate total pay.
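
A minimal sketch of the time card example in Python follows; the class and attribute names are chosen for illustration only:

    class TimeCard:
        """Bundles the time card data with the operations defined on that data."""

        def __init__(self, employee_name, hourly_rate, start_time, end_time):
            self.employee_name = employee_name
            self.hourly_rate = hourly_rate
            self.start_time = start_time     # e.g. 9.0 for 9:00 am
            self.end_time = end_time         # e.g. 17.5 for 5:30 pm

        def total_hours(self):
            return self.end_time - self.start_time

        def total_pay(self):
            return self.total_hours() * self.hourly_rate

    card = TimeCard("J. Otieno", 500, 9.0, 17.5)
    print(card.total_hours(), card.total_pay())   # 8.5 4250.0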


Language translators
Although machine language is the only language the CPU understands, it is rarely used anymore since it is so difficult to use.  Every program that is not written in machine language must be translated into machine language before it can be executed.  This is done by a category of system software called language translation software.  These are programs that convert the code originally written by the programmer, called source code, into its equivalent machine language program, called object code.
There are two main types of language translators:  interpreters and compilers.

Interpreters
While a program is running, interpreters read, translate, and execute one statement of the program at a time. The interpreter displays any errors immediately on the monitor.  Interpreters are very useful for people learning how to program or debugging a program. However, the line-by-line translation adds significant overhead to the program execution time leading to slow execution.


Compilers
A compiler is a language translation program that converts the entire source program into object code, known as an object module, at one time. The object module is stored, and it is the object module that executes when the program runs. The program does not have to be compiled again until changes are made in the source code.
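The contrast can be sketched in Python, which happens to expose both styles: executing statements one at a time mimics an interpreter, while compile() translates a whole source text into a single code object, loosely analogous to an object module, that can then be run without re-translation. This is only an analogy for illustration, not a description of any particular compiler.

source = "x = 10\ny = x * 3\nprint(y)"

# Interpreter-style: translate and execute one statement at a time.
namespace = {}
for statement in source.splitlines():
    exec(statement, namespace)          # an error would surface at the offending line

# Compiler-style: translate the whole program once, then run the translated result.
code_object = compile(source, "<example>", "exec")   # roughly, produce the object code
exec(code_object, {})                                # execute the pre-translated program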

Software trends and issues
Open source software has come onto the scene. This is software that is freely available to anyone and can be easily modified. The use of open source software has increased dramatically due to the World Wide Web: users can download the source code from web sites. Open source software is often more reliable than commercial software because there are many users collaborating to fix problems. The biggest problem with open source software is the lack of formal technical support, although this is being addressed by companies that package open source software with various add-ons and sell it with support. An example is the Red Hat Linux operating system.
9. Data resources
Data
Data, the raw material for information, is defined as groups of non-random symbols that represent quantities, actions, objects etc. In information systems, data items are formed from characters that may be alphabetic, numeric, or special symbols. Data items are organized for processing purposes into data structures, file structures and databases. Data relevant to information processing and decision-making may also be in the form of text, images or voice.

Information
Information is data that has been processed into a form that is meaningful to the recipient and is of real or perceived value in current or prospective actions or decisions. It is important to note that data for one level of an information system may be information for another. For example, data input to the management level is information output from a lower level of the system, such as the operations level. Information resources are reusable: when retrieved and used, information does not lose value; it may indeed gain value through the credibility added by use.

The value of information is described most meaningfully in the context of a decision. If there were no current or future choices or decisions, information would be unnecessary. The value of information in decision-making is the value of the change in decision behaviour caused by the information, less the cost of obtaining the information. Decisions, however, are usually made without the “right” information. The reasons are:

  • The needed information is unavailable
  • The effort to acquire the information is too great or too costly.
  • There is no knowledge of the availability of the information.
  • The information is not available in the form needed.

Much of the information that organizations or individuals prepare has value other than in decision-making. The information may also be prepared for motivation and background building.

Desirable qualities of information
  • Availability – Information should be available and accessible to those who need it.
  • Comprehensible – Information should be understandable to those who use it.
  • Relevance – Information should be applicable to the situations and performance of organizational functions. Relevant information is important to the decision maker.
  • Secure – Information should be secure from access by unauthorized users.
  • Usefulness – Information should be available in a form that is usable.
  • Timeliness - Information should be available when it is needed.
  • Reliability – Reliable information can be depended on. In many cases, reliability of information depends on the reliability of the data collection method. In other instances, reliability depends on the source of information.
  • Accuracy – Information should be correct, precise and without error. In some cases inaccurate information is generated because inaccurate data is fed into the transformation process (this is commonly called garbage in garbage out, GIGO).
  • Consistency – Information should not be self-contradictory.
  • Completeness – Complete information contains all the important facts. For example an investment report that does not contain all the costs is not complete.
  • Economical – Information should always be relatively economical to produce. Decision makers must always balance the value of information and the cost of producing it.
  • Flexibility – Flexible information can be used for a variety of purposes.

Data Processing
Data processing may be defined as those activities which are concerned with the systematic recording, arranging, filing, processing and dissemination of facts relating to the physical events occurring in the business. Data processing can also be described as the activity of manipulating raw facts to generate a set of meaningful data, which is described as information. Data processing activities include data collection, classification, sorting, adding, merging, summarizing, storing, retrieval and dissemination.

The black box model is an extremely simple principle of a machine, that is, irrespective of how a machine operates internally any machine takes an input, operates on it and then produces an output.




    Input  →  [ Machine / Process ]  →  Output


In dealing with digital computers this data consists of: numerical data, character data and special (control) characters.

Use of computers for data processing involves four stages:
  • Data input – This is the process of data capture into the computer system for processing. Input devices are used.
  • Storage – This is an intermediary stage where input data is stored within the computer system or on secondary storage awaiting processing or output after processing. Program instructions to operate on the data are also stored in the computer.
  • Processing – The central processing unit of the computer manipulates data using arithmetic and logical operations.
  • Data output – The results of the processing function are output by the computer using a variety of output devices.

Data processing activities
The basic processing activities include the following (a minimal Python sketch follows the list):
  • Record – bring facts into a processing system in usable form
  • Classify – data with similar characteristics are placed in the same category, or group.
  • Sort – arrangement of data items in a desired sequence
  • Calculate – apply arithmetic functions to data
  • Summarize – to condense data or to put it in a briefer form
  • Compare – perform an evaluation in relation to some known measures
  • Communicate – the process of sharing information
  • Store – to hold processed data for continuing or later use.
  • Retrieve – to recover data previously stored
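The sketch below, using a hypothetical list of sales records, illustrates a few of these activities (sort, classify/summarize and calculate); the field names and figures are invented.

# Hypothetical raw facts (records) captured into the system
sales = [
    {"branch": "Nairobi", "item": "A4 paper", "amount": 1200},
    {"branch": "Mombasa", "item": "Toner",    "amount": 5500},
    {"branch": "Nairobi", "item": "Staples",  "amount": 300},
]

# Sort: arrange records in a desired sequence (here, by amount, descending)
sales_sorted = sorted(sales, key=lambda r: r["amount"], reverse=True)

# Classify and summarize: group records by branch and condense them to totals
totals = {}
for record in sales:
    totals[record["branch"]] = totals.get(record["branch"], 0) + record["amount"]

print(sales_sorted[0]["item"])   # Toner (the highest-value sale)
print(totals)                    # {'Nairobi': 1500, 'Mombasa': 5500}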


Information processing
This is the process of turning data into information by making it useful to some person or process.
Computer files
A file is a collection of related data or information that is normally maintained on a secondary storage device. The purpose of a file is to keep data in a convenient location where they can be located and retrieved as needed. The term computer file suggests organized retention on the computer that facilitates rapid, convenient storage and retrieval.

As defined by their functions, two general types of files are used in computer information systems: master files and transaction files.

Master files
Master files contain information to be retained over a relatively long time period. Information in master files is updated continuously to represent the current status of the business.

An example is an accounts receivable file. This file is maintained by companies that sell to customers on credit. Each account record will contain such information as account number, customer name and address, credit limit amount, the current balance owed, and fields indicating the dates and amounts of purchases during the current reporting period. This file is updated each time the customer makes a purchase. When a new purchase is made, a new account balance is computed and compared with the credit limit. If the new balance exceeds the credit limit, an exception report may be issued and the order may be held up pending management approval.
Transaction files
Transaction files contain records reflecting current business activities. Records in transaction files are used to update master files.

To continue with the illustration, records containing data on customer orders are entered into transaction files. These transaction files are then processed to update the master files. This is known as posting transaction data to the master file. For each customer transaction record, the corresponding master record is accessed and updated to reflect the last transaction and the new balance. At this point, the master file is said to be current.
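As an illustration of posting, the following minimal sketch updates a hypothetical accounts receivable master file (held here as an in-memory Python dictionary) from a transaction file of purchases. The field names and the credit-limit check follow the description above, but the data are invented.

# Master file: one record per customer account (simplified)
master = {
    "A001": {"name": "Wanjiku Stores", "credit_limit": 10000, "balance": 2500},
    "A002": {"name": "Coast Traders",  "credit_limit": 5000,  "balance": 4800},
}

# Transaction file: current purchases to be posted to the master file
transactions = [
    {"account": "A001", "amount": 3000},
    {"account": "A002", "amount": 600},
]

for txn in transactions:
    record = master[txn["account"]]               # access the corresponding master record
    new_balance = record["balance"] + txn["amount"]
    if new_balance > record["credit_limit"]:
        print("Exception report:", record["name"], "would exceed the credit limit")
    else:
        record["balance"] = new_balance           # the master record is now current

print(master["A001"]["balance"])   # 5500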
 
Accessing Files
Files can be accessed
  • Sequentially - start at first record and read one record after another until end of file or desired record is found
    • known as “sequential access”
    • only possible access for serial storage devices
  • Directly - read desired record directly
    • known as “random access” or “direct access”

File Organization
Files need to be properly arranged and organised to facilitate easy access and retrieval of the information. Types of file organisation (physical method of storage) include:
  • Serial
  • Sequential
  • Indexed-Sequential
  • Random
All file organisation types apply to direct access storage media (disk, drum etc.).
A file on a serial storage medium (e.g. tape) can only be organised serially.

Serial Organization
  • Each record is placed in turn in the next available storage space
  • A serial file must be accessed sequentially implying
    • good use of space
    • high access time
  • Usually used for temporary files, e.g. transaction files, work files, spool files
    Note: The method of accessing the data on the file is different from its organisation
    • E.g. sequential access of a randomly organised file
    • E.g. direct access of a sequential file

Sequential organization
§  Records are organised in ascending sequence according to a certain key
§  Sequential files are accessed sequentially, one record after the next
§  Suitable
    • for master files in a batch processing environment
    • where a large percentage of records (high hit-rate) are to be accessed
§  Not suitable for online access requiring a fast response as file needs to be accessed sequentially

Indexed Sequential
§  One of the most commonly used methods of file organisation
§  File is organised sequentially and contains an index
§  Used on direct access devices
§  Used in applications that require sequential processing of large numbers of records but occasional direct access of individual records
§  Increases processing overheads with maintenance of the indices

Random organization
  • Records are stored in a specific location determined by a randomising (hashing) algorithm (see the sketch after this list)
    • function (key) = record location (address)
  • Records can be accessed directly without regard to physical location
  • Used to provide fast access to any individual record
    e.g. airline reservations, online banking
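A minimal sketch of the randomising idea, assuming a hash function and a fixed number of storage locations: a function of the record key yields the address at which the record is stored and later retrieved directly.

BUCKETS = 7                               # assumed number of storage locations

def address(key):
    # Randomising algorithm: function(key) = record location (address)
    return hash(key) % BUCKETS

storage = [[] for _ in range(BUCKETS)]    # each location can hold colliding records

def store(key, record):
    storage[address(key)].append((key, record))

def fetch(key):
    for k, record in storage[address(key)]:   # go straight to the computed location
        if k == key:
            return record
    return None

store("FLT-512", {"passenger": "A. Kamau", "seat": "14C"})
print(fetch("FLT-512"))    # direct access without scanning the whole file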

 
Problems of traditional file based approach
Each function in an organisation develops specific applications in isolation from other divisions, each application using its own data files. This leads to the following problems:

  • Data redundancy
    • duplicate data in multiple data files
  • Redundancy leads to inconsistencies
    • in data representation, e.g. referring to the same person as client or customer
    • in the values of data items across multiple files
  • Data isolation — multiple files and formats
  • Program-data dependence
    • tight relationship between data files and specific programs used to maintain files
  • Lack of flexibility
    • Need to write a new program to carry out each new task
  • Lack of data sharing and availability
  • Integrity problems
    • Integrity constraints  (e.g. account balance > 0) become part of program code
    • Hard to add new constraints or change existing ones
  • Concurrent access by multiple users difficult
    • Concurrent access is needed for performance
    • Uncontrolled concurrent accesses can lead to inconsistencies
    • E.g. two people reading a balance and updating it at the same time
  • Security problems


Data files and databases
A data file is a structured collection of data (information). The data is related in some manner. It is organized so that relationships within the data are revealed (or revealable). A data file stores several (many) pieces of information about many data objects. The simplest and most efficient metaphor of how data is organized in a data file is as a table of rows and columns, like a spreadsheet but without the linkages between individual cells. A data file is made up of a number of records; each row in a table is a separate record. Each record is made up of all the data about a particular entity in the file.

A record includes many data items, each of which is a separate cell in the table. Each column in the table is a field; it is a set of values for a particular variable, and is made up of all the data items for that variable. Examples include phone book, library catalogue, hospital patient records, and species information.

A database is an organized collection of (one or more) related data file(s). The way the database organizes data depends on the type of database, called its data model, which may be hierarchical, network or relational.



Benefits of the database approach
§  Provide Data Independence
o   separating the physical (how) & logical (what) aspects of the system
§  Physical data independence
o   protects the application programs from changes in the physical placement of the files
o   the ability to modify the physical schema without changing the logical schema
§  Logical data independence
o   Modify logical schema without changing application programs
§  Reduce redundancy
o   reduce duplicate data items
o   some redundancy may be necessary for business or technical reasons - DBA must ensure updates are propagated (a change to one is automatically applied to the other)
§  Avoid inconsistency (by reducing redundancy)
o   if it is necessary - propagate updates
§  Maintain integrity - i.e. ensure the data is accurate by
o   reducing redundancy
o   implementing integrity rules, e.g. through foreign keys
§  Share data
o   among existing applications
o   used in new applications
§  Allow implementation of security restrictions
o   establish rules for different types of user for different types of update to database
§  Enforce standards for
o   data representation - useful for migrating data between systems
o   data naming & documentation  - aids data sharing & understandability
§  Balance conflicting requirements
o   structure the corporate data in a way that is best for the organisation

Database Management Systems (DBMS)
DBMSs are system software that aid in organizing, controlling and using the data needed by application programs. A DBMS provides the facility to create and maintain a well-organized database. It also provides functions such as normalization to reduce data redundancy, decrease access time and establish basic security measures over sensitive data.

DBMS can control user access at the following levels:
¨       User and the database
¨       Program and the database
¨       Transaction and the database
¨       Program and data field
¨       User and transaction
¨       User and data field

The following are some of the advantages of DBMS:
  • Data independence for application systems
  • Ease of support and flexibility in meeting changing data requirements
  • Transaction processing efficiency
  • Reduction of data redundancy (similar data being held at more than one point – utilizes more resources) – have one copy of the data and avail it to all users and applications
  • Maximizes data consistency – users have same view of data even after an update
  • Minimizes maintenance cost through data sharing
  • Opportunity to enforce data/programming standards
  • Opportunity to enforce data security
  • Availability of stored data integrity checks
  • Facilitates terminal users’ ad hoc access to data, especially through specially designed query languages/application generators

Most DBMSs have internal security features that interface with the operating system access control mechanism/package, unless the database was implemented on a raw device. A combination of the DBMS security features and security package functions is often used to cover all required security functions. This dual security approach, however, introduces complexity and opportunity for security lapses.
 
 
DBMS architecture
Data elements required to define a database are called metadata. There are three types of metadata: conceptual schema metadata, external schema metadata and internal schema metadata. If any one of these elements is missing from the data definition maintained within the DBMS, the DBMS may not be adequate to meet users’ needs. A data definition language (DDL) is a component used for creating the schema representation necessary for interpreting and responding to the users’ requests.
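As an illustration of a data definition language, the following minimal sketch uses SQL DDL statements, run here through Python's built-in sqlite3 module, to create part of a schema; the table and column names are invented for the example.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- DDL: define the schema, not the processing logic
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE invoice (
        invoice_no  INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),  -- integrity via a foreign key
        amount      REAL
    );
""")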

Data dictionary and directory systems (DD/DS) have been developed to define and store in source and object forms all data definitions for external schemas, conceptual schemas, the internal schema and all associated mappings. The data dictionary contains an index and description of all the items stored in the database. The directory describes the location of the data and access method. Some of the benefits of using DD/DS include:

  • Enhancing documentation
  • Providing common validation criteria
  • Facilitating programming by reducing the need for data definition
  • Standardizing programming methods

Database structure
The common database models are:
  • Hierarchical database model
  • Network database model
  • Relational database model
  • Object–oriented model



Hierarchical database model
This model allows the data to be structured in a parent/child relationship (each parent may have many children, but each child would be restricted to having only one parent). Under this model, it is difficult to express relationships when children need to relate to more than one parent. When the data relationships are hierarchical, the database is easy to implement, modify and search.

A hierarchical structure has only one root. Each parent can have numerous children, but a child can have only one parent. Subordinate segments are retrieved through the parent segment. Reverse pointers are not allowed. Pointers can be set only for nodes on a lower level; they cannot be set to a node on a predetermined access path.


 














Network Database Model
The model allows children to relate to more than one parent. A disadvantage to the network model is that such structure can be extremely complex and difficult to comprehend, modify or reconstruct in case of failure. The network structure is effective in stable environments where the complex interdependencies of the requirements have been clearly defined.

The network structure is more flexible, yet more complex, than the hierarchical structure. Data records are related through logical entities called sets. Within a network, any data element can be connected to any item. Because networks allow reverse pointers, an item can be an owner and a member of the same set of data. Members are grouped together to form records, and records are linked together to form a set. A set can have only one owner record but several member records.














Relational Database Model
The model is independent from the physical implementation of the data structure. The relational database organization has many advantages over the hierarchical and network database models. They are:
  • Easier for users to understand and implement in a physical database system
  • Easier to convert from other database structures
  • Projection and join operations (referencing groups of related data elements not stored together) are easier to implement, and the creation of new relations for applications is easier to do.
  • Access control over sensitive data is easy to implement
  • Faster in data search
  • Easier to modify than hierarchical or network structures

Relational database technology separates data from the application and uses a simplified data model. Based on set theory and relational calculus, a relational database models information in a table structure with columns and rows. Columns, called domains or attributes, correspond to fields. Rows, or tuples, are equal to records in a conventional file structure. Relational databases use normalization rules to minimize the amount of information needed in tables to satisfy users’ structured and unstructured queries to the database.
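The following minimal sketch represents two small relations as Python sets of tuples and implements a projection and a join over them, illustrating how related data held in separate tables can be referenced together; the relations and their contents are invented.

# Two relations: rows are tuples; the columns are
# customer(customer_id, name) and invoice(invoice_no, customer_id, amount)
customer = {(1, "Wanjiku Stores"), (2, "Coast Traders")}
invoice  = {(501, 1, 3000.0), (502, 2, 600.0), (503, 1, 150.0)}

# Projection: keep only the name column of customer
names = {name for (_, name) in customer}

# Join: combine rows of the two relations that share the same customer_id
joined = {(name, inv_no, amount)
          for (cust_id, name) in customer
          for (inv_no, inv_cust, amount) in invoice
          if cust_id == inv_cust}

print(names)
print(sorted(joined))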
 













Database administrator
Coordinates the activities of the database system. Duties include:
§  Schema definition
§  Storage structure and access method definition
§  Schema and physical organisation modification
§  Granting user authority to access the database
§  Specifying integrity constraints
§  Acting as liaison with users
§  Monitoring performance and responding to changes in requirements
§  Security definitions

 
Database Security, Integrity and Control
Security is the protection of data from accidental or deliberate threats, which might cause unauthorized modification, disclosure or destruction of data and the protection of the information system from the degradation or non-availability of service. Data integrity in the context of security is when data are the same as in source documents and have not been accidentally or intentionally altered, destroyed or disclosed. Security in database systems is important because:

  • Large volumes of data are concentrated into files that are physically very small
  • The processing capabilities of a computer are extensive, and enormous quantities of data are processed without human intervention.
  • It is easy to lose data in a database through equipment malfunction, corrupt files or loss during copying of files, and data files are susceptible to theft, floods etc.
  • Unauthorized people can gain access to data files and read classified data on files
  • Information on a computer file can be changed without leaving any physical trace of change
  • Database systems are critical to an organization’s competitive advantage

Some of the controls that can be put in place include:
1)      Administrative controls – controls by non-computer based measures. They include:
a.      Personnel controls e.g. selection of personnel and division of responsibilities
b.      Secure positioning of equipment
c.       Physical access controls
d.      Building controls
e.       Contingency plans
2)      PC controls
a.      Keyboard lock
b.      Password
c.       Locking disks
d.      Training
e.       Virus scanning
f.        Policies and procedures on software copying
3)      Database controls – a number of controls have been embedded into DBMS, these include:
a.      Authorization – granting of privileges and ownership, authentication
b.      Provision of different views for different categories of users
c.       Backup and recovery procedures
d.      Checkpoints – the point of synchronization between database and transaction log files. All buffers are force written to storage.
e.       Integrity checks e.g. relationships, lookup tables, validations
f.        Encryption – coding of data by special algorithm that renders them unreadable without decryption
g.      Journaling – maintaining log files of all changes made
h.      Database repair
4)      Development controls – when a database is being developed, there should be controls over the design, development and testing e.g.
a.      Testing
b.      Formal technical review
c.       Control over changes
d.      Controls over file conversion
5)      Document standards – standards are required for documentation such as:
a.      Requirement specification
b.      Program specification
c.       Operations manual
d.      User manual
6)      Legal issues
a.      Escrow agreements – legal contracts concerning software
b.      Maintenance agreements
c.       Copyrights
d.      Licenses
e.       Privacy
7)      Other controls including
a.      Hardware controls such as device interlocks which prevent input or output of data from being interrupted or terminated, once begun
b.      Data communication controls e.g. error detection and correction.
Database recovery is the process of restoring the database to a correct state in the event of a failure.
 
Some of the techniques include:
1)      Backups
2)      Mirroring – two complete copies of the database are maintained online on different stable storage devices.
3)      Restart procedures – no transactions are accepted until the database has been repaired
4)      Undo/redo – undoing and redoing a transaction after failure.

A distributed database system exists where logically related data is physically distributed between a number of separate processors linked by a communication network.

A multidatabase system is a distributed system designed to integrate data and provide access to a collection of pre-existing local databases managed by heterogeneous database systems such as Oracle.

10. Terminology
Multiprogramming
Multiprogramming is a rudimentary form of parallel processing in which several programs are run at the same time on a uniprocessor. Since there is only one processor, there can be no true simultaneous execution of different programs. Instead, the operating system executes part of one program, then part of another, and so on. To the user it appears that all programs are executing at the same time.

Multiprocessing
Multiprocessing is the coordinated (simultaneous execution) processing of programs by more than one computer processor. Multiprocessing is a general term that can mean the dynamic assignment of a program to one of two or more computers working in tandem or can involve multiple computers working on the same program at the same time (in parallel).

Multitasking
In a computer operating system, multitasking is allowing a user to perform more than one computer task (such as the operation of an application program) at a time. The operating system is able to keep track of where you are in these tasks and go from one to the other without losing information. Microsoft Windows 2000, IBM's OS/390, and Linux are examples of operating systems that can do multitasking (almost all of today's operating systems can). When you open your web browser and then open Word at the same time, you are causing the operating system to do multitasking.

Multithreading
It is easy to confuse multithreading with multitasking or multiprogramming, which are somewhat different ideas.

Multithreading is the ability of a program or an operating system process to manage its use by more than one user at a time, and even to manage multiple requests by the same user, without having to have multiple copies of the program running in the computer.
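A minimal sketch of the idea in Python: a single running program serves several requests at once by giving each request its own thread, rather than running a separate copy of the program for each user. The requests are invented for illustration.

import threading

def handle_request(user, query):
    # Each request is handled on its own thread within the single running program
    print(f"Handling {query!r} for {user}")

requests = [("user1", "balance enquiry"), ("user2", "statement"), ("user1", "transfer")]

threads = [threading.Thread(target=handle_request, args=req) for req in requests]
for t in threads:
    t.start()
for t in threads:
    t.join()    # wait for all requests to finish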



REINFORCING QUESTIONS
QUESTION ONE
(a) Name the major components of a computer and the function of each.                                                                                                                                               (6 Marks)
(b) What are the four different types of semiconductor memory and where are they used?                                                                                                                   (8 Marks)
(c) Distinguish between serial, parallel and massively parallel processing.           (6 Marks)
                                                                                                (Total: 20 marks)

QUESTION TWO
(a) List the most important secondary storage media. What are the strengths and limitations of each?                                                                                                 (10 Marks)
(b) What is the difference between batch and on-line processing?                                                                                                                                                             (5 Marks)
(c) What is multimedia? What technologies are involved?                                                                                                                                                                         (5 Marks)
                                                                                                (Total: 20 marks)

QUESTION THREE
(a) List and describe the major input devices.                                             (7 Marks)
(b) List and describe the major output devices.                                           (7 Marks)
(c) What are downsizing and cooperative processing as used in computer technology?                                                                                                             (6 Marks)
                                                                                                (Total: 20 marks)

QUESTION FOUR
(a) What are the major types of software? How do they differ in terms of users and uses?                                                                                                                    (4 marks)
(b) Describe multiprogramming, virtual storage, time-sharing, and multiprocessing. Why are they important for the operation of an information system?      (16 Marks)
                                                                                                (Total: 20 marks)

QUESTION FIVE
(a) What is multiprogramming as applied to a computer system? Explain two major advantages of multiprogramming.                                                               (6 marks)
(May 2002 Question 3d)
(b) Differentiate between data and information.                                         (2 marks)
(c) Discuss how virtual memory concept is implemented indicating its key objective.
(6 marks)
                                                                                                (May 2002 Question 6b)
(d) A software engineer requires a range of software utilities. Explain the usefulness of any three such utilities.                                                                                    (6 marks)
                                                                                                (May 2002 Question 1b)
                                                                                    (Total: 20 marks)

CHECK YOUR ANSWERS WITH THOSE GIVEN IN LESSON 9 OF THE STUDY PACK




SYSTEMS THEORY AND ORGANIZATIONS


CONTENTS
1.      System concepts
1.1.   System characteristics
1.2.   Classification of systems
1.3.   Components of systems

2.      Objectives and applications of systems approach
2.1.   Systems theory concepts

3.      Organizations
3.1.   Introduction to various organizational theories
3.1.1.      Classical or empirical approach
3.1.1.1.Scientific management or Taylorism
3.1.1.2.Departmental approach
3.1.1.3.Weber’s Bureaucratic organization
3.1.1.4.Human relation school
3.1.1.5.System contingency approach
3.2.   Organizational structures
3.3.   The organizational hierarchy




INSTRUCTIONS

1.      Additional Reading: Chapter 3 of the study text.
2.      Complete the Reinforcing Questions at the end of the lesson.
3.      Compare your answers to the models given in the revision section of the study pack.
4.      Answer the comprehensive assignment at the end of the lesson and submit it to Distance Learning Centre, Strathmore University for marking.

1. Systems concepts
A system is a set of interacting components that work together to accomplish specific goals. For example, a business is organized to accomplish a set of specific functions. Any situation which involves the handling or manipulation of materials or resources of any kind, whether human, financial or informational, may be structured and represented in the form of a system.
 
1.1 Characteristics of System
a)      Purpose – Systems exist to fulfil some objective or satisfy a need. A system may accomplish more than one task. The purpose of a system is closely tied to its rationale.
b)     Rationale – This is the justification for a system’s existence.
c)      Efficiency – This is how well a system utilizes its resources, that is, doing things right.
d)     Effectiveness – How well a system fulfils its purpose, assuming that its purpose is the right one. Involves a system doing the right things.
e)      Inputs – Entities that enter the system to produce output or furnish information.
f)       Outputs – Entities that exit from the system either as interfaces or for end-user activities. They may be used to evaluate system’s efficiency and effectiveness.
g)      Transformation rules – Specify how the input is processed to produce output.
h)     Throughput – Measures the quantity of work a system accomplishes. Does not consider the quality of the output.
i)       Boundary – Artificially delimits a system for study or discussion purposes. System designers can only control those system components within the boundary.
j)       Environment – That which impacts the system but is outside the system’s boundary. The system cannot control events in the environment.
k)     Interfaces – Points where two systems meet and share inputs and outputs. Interfaces belong to the environment although they may be inside the system boundary.
l)       Feedback – Recycles outputs as subsequent inputs, or measures outputs to assess effectiveness.



1.2 Classification of systems
Each system can be characterized along a wide range of various characteristics. 

Physical systems Vs Abstract systems
A physical system consists of a set of elements, which are coordinated and operate as a whole entity to achieve a certain objective. This system may also be called a concrete system.
An abstract system is an orderly arrangement of conceptual items or components.

Simple systems Vs Complex systems
A simple system has few components, and the relationship or interaction between elements is uncomplicated and straightforward.

A complex system has many elements that are highly related and interconnected.

Open systems Vs Closed systems
An open system interacts with its environment. It is a system with a feedback mechanism that promotes the free exchange of information between the system and the external entities. Organizations are open systems.

A closed system has no interaction with the environment. This is a system that neither transmits information to the outside world nor receives any information from the outside world. It is mainly a scientific concept (e.g. physics experiments).
 
 
Open loop systems Vs closed loop systems
An open-loop system is a system, which does not act in a controlled manner, that is, there is no feedback loop, and so no measure of performance against standards.

A closed-loop system is a system that functions in a controlled manner, such a system accepts inputs, works upon them according to some predefined processing rules and produces outputs. Such a system is controlled via a feedback loop.


Stable/Static systems Vs Dynamic systems
A stable system undergoes very little change over time. A dynamic system undergoes rapid and constant change over time.                 


Adaptive systems Vs Non-adaptive systems
An adaptive system is able to change in response to changes in the environment. These systems can also be described as cybernetic or self-organizing systems.
A non-adaptive system is not able to change in response to changes in the environment.
 
Deterministic systems Vs Probabilistic systems
Deterministic systems operate in a predictable manner, for example thermostats and computer programs. In probabilistic systems, however, it is not possible to determine the next state of the system even if the current state is known; such systems depend on probability distributions. An example is a doctor’s diagnostic system.

Permanent systems Vs Temporary systems
A permanent system exists for a relatively long period of time.
A temporary system exists for only a relatively short period of time.


1.3 Components of systems
[Diagram: interacting subsystems enclosed by the system boundary, with inputs entering and outputs leaving, surrounded on all sides by the environment]
Inputs
These provide the system with what it needs to operate. It may include machines, manpower, raw materials, money or time.

Processes
Include policies, procedures, and operations that convert inputs into outputs.

Outputs
These are the results of processing and may include information in the right format, conveyed at the right time and place, to the right person.

Systems Boundary
A system boundary defines the system and distinguishes it from its environment.

Subsystems
A subsystem is a unit within a system that shares some or all of the characteristics of that system. Subsystems are smaller systems that make up a super-system/supra-system. All systems are part of larger systems.
Environment
This is the world surrounding the system, which the system is a subsystem of.

2. Objectives and applications of systems approach

Features of systems Theory
1.      All systems are composed of inter-related parts or sub-systems and the system can only be explained as a whole. This is known as holism or synergy. The systems view is that the whole is more than just the sum of its parts and that vital interrelationships will be ignored and misunderstood if the separate parts are studied in isolation.

2.      Systems are hierarchical, that is, the parts and sub-systems are made up of other smaller parts. For example, a payroll system is a subsystem of the accounting system, which is itself a subsystem of the whole organisation. One system is a subsystem of another.
3.      The parts of a system constitute an indissoluble whole, so that no part can be altered without affecting other parts. Many organisational problems arise when this principle is flouted or ignored. Changes to one department can create adverse ripple effects on others: for example, changing a procedure or the type of data captured in one department (such as admissions) could affect other departments (such as faculties).
4.      The sub-systems should work towards the goals of their higher systems and should not pursue their own objectives independently. When subsystems pursue their own objectives, a condition of sub-optimality arises, and with this the failure of the organisation is close at hand!

Information systems designers should seek to avoid the sub-optimality problem!

5.      Organisational systems contain both hard and soft properties. Hard properties are those that can be assessed in some objective way, e.g. the amount of PAYE tax payable under a given tax code, or the size of a product – they are quantifiable.

Soft properties are a matter of individual taste. They cannot be assessed by any objective standard or measuring process, e.g. the appearance of a product, the suitability of a person for a job, or any problem containing a political element.


Importance of systems theory:
a)      It provides a theoretical framework for study of performance of businesses
b)      It stresses the fact that all organizations are made up of subsystems, which must work together harmoniously in order that goals of the overall system can be achieved.
c)      It recognizes the fact that conflicts can arise within a system, and that such conflicts can lead to sub-optimization and that, ultimately, can even mean that an organization does not achieve its goals.
d)     It allows the individual to recognize that he/she is a subsystem within a larger system, and that the considerations of systems concept apply to him/her, also.
e)      Given the above factors, it is clear that information-producing systems must be designed to support the goals of the total system, and that this must be borne in mind throughout their development.



Systems theory concepts

  • Entropy – The tendency towards disorder (chaos) in a system. The more closed a system is, the greater the entropy.

  • Feedback – This is a control mechanism in open systems. Feedback involves measuring the output of the system, comparing the output with a standard and using any difference to modify subsequent input to ensure that the outputs conform to the required standard.

 
[Diagram: feedback loop – the sensor measures the system output, the comparator compares it with the goal, and the effector adjusts the subsequent input]

Elements of control include (a minimal sketch follows this list):

-          Goal: This is the expected performance, plan or results.
-          Sensor: Measures actual performance.
-          Comparator: Compares expected results to actual results obtained.
-          Effector: Reports deviation and initiates the response which may lead to a redirection of activity, revision of the expectation or changing the plan.
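The sketch below models a thermostat-like feedback loop in Python and maps directly onto these elements: the sensor measures the output, the comparator compares it with the goal, and the effector initiates the corrective response. All values are invented for illustration.

goal = 21.0                       # expected performance (target temperature)
temperature = 18.0                # current system output

def sensor():
    return temperature            # measures actual performance

def comparator(actual):
    return goal - actual          # compares expected results with actual results

def effector(deviation):
    global temperature
    # Reports the deviation and initiates a corrective response
    temperature += 0.5 * deviation

for _ in range(10):               # repeated feedback cycles
    effector(comparator(sensor()))

print(round(temperature, 2))      # the output converges towards the goal of 21.0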

  • Feed-forward – It means to take steps to make some adjustments to the system in advance in order to face any expected deviations in future. Feedback monitors the past results whereas feed-forward deals with future outcomes.
  • Functional Decomposition – This involves factoring a system to its constituent subsystems. The subsystems are also decomposed further into manageable sizes resulting in a hierarchy structure of a system. Decomposition is used to analyse the existing system, to design and finally implement a new system.
  • Functional cohesion – Involves dividing into subsystems by grouping activities that logically go together.
  • Coupling – Occurs when two systems are highly interrelated.
  • Decoupling – This is a process in which the subsystems are given autonomy and independence. The subsystems operate independently thereby pursuing own objectives and enhancing flexibility.
  • Synergy – The whole is greater than the sum of its parts. The focus is on global system needs, not local issues. It means that systems working together produce a greater result than each would achieve independently.
  • Optimization – It is possible to achieve a best solution.
  • Sub-optimization – Occurs when the objectives of one element or subsystem conflict with the objectives of the whole system.
  • Equifinality – Certain results may be achieved with different initial conditions and in different ways. In open systems the same final state can be reached from several starting points and through different methods; one result can have different causes, and there is more than one way to achieve the objective.
  • Goal-seeking – systems attempt to stabilize at a certain point.
  • Holism – the analysis of a system is considered from the point of view of the whole system and not on individual subsystems. Subsystems are studied in the context of the entire system.

3. Organizations
An organization is a group created and maintained to achieve specific objectives.
¨       A hospital with objectives dealing with human care.
¨       A local authority with objectives concerned with providing services to the local community.
¨       A commercial company with objectives including earning profits, providing a return for shareholders and so on.

The following features that describe organizations would be accepted by most people:
¨       Goal oriented i.e. people with a purpose.
¨       Social systems i.e. people working in groups.
¨       Technical systems i.e. people using knowledge, techniques and machines.
¨       The integration of structured activities i.e. people coordinating their efforts.

3.1 Organizational theories
Organizational theory is the body of knowledge relating to the philosophical basis of the structure, functioning and performance of organizations. Such theory is derived from historical schools of thought stating the point of view of a number of early pioneers of management. A broad chronological sequence of the three main schools of thought which have contributed to an understanding of the nature of organizations and management is shown in the figure below:















Classical or Empirical Approach
Also known as the traditional approach. The classical or management process approach to management was evolved in the early part of the 20th century. This theory is based on contributions from a number of sources including:

·         Scientific management (from Taylor, Gantt, the Gilbreths and others)
·         Administrative Management Theorists (Fayol, Urwick, Brech and others)
·         Bureaucracy / Academics (notably from Weber)

Whilst not completely ignoring the behavioural aspects of organization, the traditional emphasis was on the structure of organizations, the management of structures and control of production methods. All organizations were treated similarly, and there was a search for universal principles that could be applied to any organization. On the whole the approach took a relatively mechanistic view of organizations, with a tendency to treat them as closed systems.

(i)                 Scientific management or Taylorism
In the late 1890s, Frederick Taylor introduced the concept of scientific management. Taylor’s approach focused on the effective use of human beings in organizations; it is a rational engineering approach to work based on time and motion studies. His pioneering work was refined and developed by others. There are four main principles of scientific management:

a)      Develop the best or ideal method of doing a task and determine a standard scientifically. The worker should be paid an incentive for exceeding this standard.
b)      Select the best person for the job and train him or her in the way to accomplish the task.
c)      Combine the scientific method with well selected and trained people (workers will not resist improved method, since they receive more money because of them).
d)     Take all responsibility for planning and give it to the management. The worker is only responsible for the actual job performance. Scientific management and intensive study of the activities of individual employees answered many questions on human engineering.


Benefits and drawbacks of scientific management

Benefits
a)      The improvement in working methods resulted in enormous gains in productivity.
b)      The measurement and analysis of tasks provided factual information on which to base improvements in methods and equipment.
c)      It provided a rational basis for piecework and incentive schemes, which became more widely used.
d)     There were considerable improvements in working conditions.
e)      Management became more involved with production activities and was thus encouraged to show positive leadership.

Drawbacks
a)      Jobs became more boring and repetitive.
b)      Planning, design and control became divorced from performance thus de-skilling tasks.
c)      Workers became virtual adjuncts to machines, with management having a monopoly of knowledge and control.
d)     De-skilling, excessive specialization, repetition and so on cause workers to become alienated and frustrated.


(ii)                    Departmental Approach
A number of theorists, including Gulick, Urwick and Fayol, have described organizations based on groupings of various activities into departments. These theorists looked at the organization as divided into departments, treated it as a giant machine and tried to develop principles or universal laws that govern the machine’s activities. The general problem addressed in their writing is: given an organization, how do you identify the unit tasks and how do you organize these tasks into individual jobs? Then how are the jobs organized into administrative units, and finally how are top-level departments established? The result of this analysis is the structuring of departments within the organization; each department contains a set of tasks to be performed by workers in that department.

Example:
-          Finance department: For providing funds and ensuring effective use.
-          Production department: Provides and maintains equipment to convert raw materials into finished products and control of the production process.
-          Marketing department
-          Supply department
-          Research and development department


(iii)            Weber’s Bureaucratic Organization
Unlike other contributors to the classical view of organizations, Weber was not a practicing manager, but an academic sociologist. He is the one who first coined the term bureaucracy to describe a particular organizational form, which exists to some extent in every large enterprise whether in public or private sector.

In Weber’s view the bureaucratic organization was a logical rational organization which was technically superior to all other forms. The key elements in the ideal bureaucratic organization were as follows:

-          A well defined hierarchy of legitimate authority.
-          A division of labour based on functional specialization.
-          A clear statement of the rights and duties of personnel.
-          Rules and procedures in writing should exist to deal with all decisions to be made and situations to be handled.
-          Promotion and selection based on technical competence.

In Weber’s view a de-personalized form of organization would minimize the effect of human unpredictability. Weber concentrated on the structural aspects of organizations and in consequence took a rather mechanistic impersonal standpoint.

Weaknesses of the bureaucratic model
·         Adaptability and change are made more difficult because of standardized rules, procedures and types of decisions.
·         Rules tend to become important in their own right rather than as a means of promoting efficiency.

The contribution of the classical theorists can be summarized as follows:
¨       They introduced the idea that management was a suitable subject for intellectual analysis.
¨       They provided a foundation of ideas on which subsequent theorists have built.
¨       Criticism of their work has stimulated empirical studies of actual organizational behaviour.

Human Relation School
The human relations school of organizations studied human individuals in the organization from a psychological point of view. The approach is based on a series of experiments conducted in the 1920s at Western Electric’s Hawthorne plant by Mayo. The experiments revealed that an organization is more than a formal structure or arrangement of functions. The results of this research focused attention on the behavioural approach to management, and Mayo concluded that an organization is a social system: a system of cliques, grapevines, informal status systems, rituals and a mixture of logical and non-logical behaviour.

Concepts of the human relations approach
Some of the concepts which Mayo and other workers in the human relations field developed after studying the role of individuals, informal groups, inter-group relationships and the formal relationship with the organization are as follows:

a)      People are not motivated by financial factors alone but by a variety of social and psychological factors.
b)      Informal work groups have important roles in determining the attitudes and performance of individuals.
c)      Management requires social skills as well as technical ones.
d)     An organization is a social system as well as technical/economic system.
e)      Traditional authoritarian leadership patterns should be modified substantially to consider psychological and social factors and should become more ‘democratic’ in nature.
f)       Participation in work organization, planning and policy formulation is an important element in organizations. This meant establishing effective communications between the various levels in the hierarchy to ensure a free flow of information. The following are some of the individuals who carried on motivation research together with their theories.



(i)                 Abraham Maslow
Maslow developed the theory that people are motivated by a desire to satisfy their specific needs and that they tend to satisfy their needs progressively, starting with the basic physiological needs and moving up the hierarchy. He suggested five levels of human needs as follows:
Level 1 – Physiological needs e.g. satisfaction of hunger, thirst, the need for sleep etc.
Level 2 – Security needs: protection against threats and danger.
Level 3 – Affiliation needs: needs for love and acceptance by others.
Level 4 – Esteem needs: needs for respect, status and recognition.
Level 5 – Self-actualisation needs: needs for self-fulfilment and self-development.

Thus, according to Maslow’s hierarchy of needs, needs are satisfied from the lower levels upwards, and once a need is satisfied it ceases to be a motivator.

(ii)              Douglas McGregor
Maslow and the need hierarchy influenced McGregor when he developed his theory of management. He described theory X as the approach that governs most current thinking about work. The following is a summary of the assumptions in theory X:

a)      Average man is inherently lazy.
b)      He lacks ambition, dislikes responsibility and must be led.
c)      He is resistant to change and is indifferent to organizational needs.
d)     Coercion and close control is required.

If theory X is adopted, management must direct, persuade and control activities of people and management must seek to coerce and modify people’s behaviour to fit the needs of the organization. Later, McGregor rejected the assumptions of theory X and proposed an alternative called theory Y. Some elements of theory Y are as follows:
a)      To the average man, work is as natural as rest or play.
b)      Motivation, potential for development, imagination and ingenuity are present in all people given proper conditions.
c)      Coercion and close control are not required.
d)     Given proper conditions people will seek out responsibility.

The implication of this theory in management if adopted is that, management tries to harness qualities of people by arranging conditions and methods of operations so that people can achieve their own goals best by directing their efforts towards organizational objectives. Cooperation rather than coercion is required.

(iii)            Schein
Schein has combined a number of the different models and assumptions about individuals in organizations into a model he calls ‘complex man’. His model suggests that the individual is both complex and highly variable, and has many motives that may change over time. A person can learn new motives and will become productively involved in an organization for a number of different reasons, responding in different ways to different managerial strategies. The result of this view is that managers need to adapt and vary their behaviour in accordance with the motivational needs of particular individuals and groups and the task in hand.

(iv)             Fredrick Herzberg
From his research Herzberg concluded that certain factors are helpful to job satisfaction, which he termed motivators and certain factors could lead to dissatisfaction, termed hygiene factors. The following is a summary of the major factors found in the two groups:

Hygiene factors: Policies and administration, Supervision, Working conditions, Money, Job security, Relationship with peers and subordinates.

Motivators: Achievement, Recognition, Responsibility, Growth and development.

Note that the motivators are related to the content of the job whilst the hygiene factors are more related to the environment of the work and are not intrinsic to the job itself. The two sets of factors are not opposites. Hygiene factors do not induce job satisfaction by themselves; to promote positive satisfaction, motivators are needed. For example, in production, hygiene factors maintain production while motivators increase output.

(v)               Rensis Likert
From his research, Likert found that successful managers built their successes on tightly knit groups of staff whose cooperation had been obtained by close attention to a range of lower and higher order motivational factors. Participation was arranged and supportive relationship within and between groups was fostered.

Systems and Contingency Approach
These theories developed from two main sources:
  • The classical school, with its somewhat mechanistic emphasis on structures which could be imposed on people.
  • The human relations school, whose laudable concentration on the needs of the individual to an extent obscured study of the organization as a whole.
Modern theorists attempted to develop from these earlier ideas a more comprehensive view of the organization. One major approach they developed is the systems approach, which sees the organization as a total system of interconnected and interactive subsystems. The organization was found to respond to numerous independent variables, of which the following are important:
  • People
  • Tasks
  • Organizational structure
  • Environment

In contrast with earlier approaches, which considered variables in isolation, systems theorists study the relationships between several of them. Systems theorists have suggested that there is no one best way of designing organizations and that, because of volatility and change, the best way is dependent (or contingent) upon prevailing conditions. Hence the development of the contingency approach.

Contingency Theory
This is the most current school and it sees each organization as a unique system resulting from an interaction of subsystems with the environment. The motto of contingency theory is ‘it all depends’. Both the systems and contingency approaches recognize organizations as complex structures with many interacting elements which must continually adapt to an uncertain and changing environment. Some of the major contributors to this approach are:

1.      Lawrence and Lorsch
The two studied the operations of a number of firms to assess the effects on the tasks and attitudes of managers in various functions operating with different structures and environments. Their major findings were:
a)      The more volatile and diverse the environment, the more task differentiation, and consequent integration, is required to achieve successful organization.
b)      More stable environment does not require much differentiation but still requires substantial integration within the functions that exist.
c)      It is more difficult to resolve conflict in organizations with a high degree of differentiation between the functions that exist.
d)     Better methods of conflict resolution result in higher performance and lead to types of differentiation and integration that suit the organization’s environment.
e)      In a predictable environment, integration is achieved through the management hierarchy, particularly at higher levels, and through rules, procedures, budgets etc. In an uncertain environment, integration is achieved at lower levels, mainly through personal interrelationships, with only a moderate use of administrative methods.
In spite of some criticism the Lawrence and Lorsch study received, it played an important role in the development of organization theory that took account of change, uncertainty and the interaction of key variables.

2.      Burns and Stalker
These two carried out a study of a number of electronics firms to see how they adapted to changes in their environment, particularly with regard to changes in market and technical conditions. The result of their study was a classification of organizations into mechanistic and organic systems.

Properties of mechanistic systems
a)      Stable environment with high certainty and predictability.
b)      High functional specialization.
c)      Detailed differentiation of duties and responsibilities.
d)     Hierarchical control, authority and communications with largely vertical interactions.
e)      Authoritarian style with clear superior-subordinate relationships and emphasis on loyalty and obedience.
f)       Low rate of innovation.

Properties of organic systems
a)      Uncertain environment, low predictability.
b)      Low functional specialization.
c)      Less structured management with more adjustment and re-definition of roles.
d)     More consultation with information and advice being communicated rather than decisions and instructions.
e)      High rate of innovation.

Examples of mechanistic organization system
Traditional industries such as steel, textiles and shipbuilding, where management controls and methods are based on well-defined rules and procedures which experience little change.

Examples of organic systems
Industries facing rapidly changing environment such as computers, pharmaceuticals etc.

3.      Joan Woodward
She carried out a study of manufacturing firms in which she observed that many organizational characteristics were closely related to the technology used. She categorized organizations on the basis of technology as follows:

a)      Small batch or individual item production.
b)      Large batch or mass production including assembly line production.
c)      Continuous process production including refinery, chemical and gas production.

Based on this categorization, Woodward found that there were clear patterns relating to features such as the span of control, chain of command and system of management. A summary of the various features is given in the table below.

Categories of technology and their typical management systems:

Small batch / individual items
  • Number of levels in chain of command: Few
  • Span of control (top management): Small
  • Span of control (middle management): Large
  • Ratio of management to operatives: Low
  • Type of management system: Mainly organic, with fewer rules and close personal relationships
  • Communication: Mainly verbal, with little paperwork

Large batch / mass production
  • Number of levels in chain of command: Medium
  • Span of control (top management): Medium
  • Span of control (middle management): Medium
  • Ratio of management to operatives: Medium
  • Type of management system: Mainly mechanistic, with clear-cut procedures, more rules and impersonal relationships
  • Communication: Mainly written, with considerable paperwork

Continuous process production
  • Number of levels in chain of command: Numerous
  • Span of control (top management): Large
  • Span of control (middle management): Small
  • Ratio of management to operatives: High
  • Type of management system: Mainly organic, with fewer rules and close personal relationships
  • Communication: Mainly verbal, with little paperwork

Woodward concluded that the method of production was an important factor affecting organization structure and that there was a particular type of structure and management style suitable for each of the types of production.

4.      Aston University group led by Pugh
The group built on the work of Woodward and found that size was an important factor in determining structure, as well as the technology used. As firms grow they become more formally structured, and the study found that large size tends to lead to:
a)      More standardization.
b)      More formalization of structures, procedures and decision rules.
c)      More specialization of tasks and functions.
d)     Less centralization, that is, less concentration of authority.

3.2 Organizational structures
  • Traditional management organization is essentially hierarchical
Traditional structure
  • Characterised by
    • strong line management structure, emphasised through reporting and responsibility
    • 'top' management is responsible to the board of directors and shareholders
    • departmental structure is usually based on functions (sales, production, finance etc.)
    • management of information systems is usually the responsibility of a specific IS department
    • there is limited scope for cross-departmental operations
  • managers at each level in the organisation are responsible for
a)       planning development
b)       organizing resources
c)       staffing
d)      directing employees
e)       controlling operations
  • Managers at different levels in the hierarchy place different emphasis on these functions:
    • top-level managers deal mainly with strategic decision-making (long-term planning)
    • 'middle-level' management focuses on tactical decisions, which are primarily to do with organizing and staffing
    • 'low-level' management deals with the day-to-day running of the organisation and is mostly involved in directing and controlling
  • the major impact of computers on organisations that have this structure has been to 'flatten' this hierarchy, largely by reducing the role of middle (and to some extent low) management
  • the traditional system provides processed information on the functioning of the organisation to the higher levels
  • with centralised computer systems, data collected by the various departments was fed to the IS department, where it was processed ready for analysis by management
  • extensive use of distributed computing allows the processed information to be supplied directly to higher levels of management
  • this is almost always based on the use of some form of information system; these are generally termed management information systems (MIS)

3.3 Organizational hierarchy

In an organization, information flows vertically (up and down) between management levels and horizontally (across) between departments. There are five basic functions found in an organization:

a)      Accounting: Keeps track of all financial activities.
b)      Manufacturing/Production:  Makes the company’s product.
c)      Marketing: Advertises, promotes and sells the product.
d)     Human resources: Finds and hires people and handles personnel matters.
e)      Research: Does product research and relays new discoveries.

There are three management levels in most organizations:

  1. Supervisors
Supervisors manage and monitor the employees or workers. They are responsible for the day-to-day operational matters. An example of a supervisor’s responsibility would be to monitor workers and materials needed to build the product.

Supervisors get information from the middle managers above them and the workers below them (primarily vertical flow). They need internal information for operational planning, and it must be detailed and current, covering day-to-day operations. An example of a supervisor’s information need is a listing of current supplies, current inventory and production output.

  2. Middle Management
Middle managers deal with control, tactical planning and decision-making. They implement the long-term goals of the organization. An example of a middle manager’s responsibility would be to set sales goals for several regions.

Middle-level managers get information from all departments (horizontally) and from all levels of management (vertically). They need historical, internal information for tactical planning, summarized in forms such as weekly or monthly reports. An example of a middle-level manager’s information need would be the concurrent information from top-level managers and supervisors required to develop production goals.

  3. Top Management
Top managers are concerned with long-range strategic planning. They need information to help them plan future growth and direction of the organization. An example of a top manager’s responsibility would be to determine the demand for current products and the sales strategies for new products.

Top-level managers get information from outside the organization and from all departments (horizontally and vertically). They need future-oriented internal and external information for strategic planning, in a summarized form that reveals the overall condition of the business. An example of a top-level manager’s information need would be the information required to plan for new facilities.
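To make the contrast between these three sets of information needs concrete, the short sketch below shows, in Python, how an MIS might present the same operational production data at each level: a detailed daily listing for supervisors, a weekly summary for middle management and a single overall figure for top management. This is a minimal illustration only; the record structure, the report functions and the sample figures are all assumed for the example and are not part of the study text.

# Hypothetical sketch: tailoring the same operational data to the
# three management levels described above.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class ProductionRecord:      # assumed record layout for daily output
    week: int                # week number
    day: str                 # e.g. "Mon"
    units_produced: int
    units_target: int

records = [
    ProductionRecord(1, "Mon", 480, 500),
    ProductionRecord(1, "Tue", 510, 500),
    ProductionRecord(2, "Mon", 450, 500),
]

def supervisor_report(records):
    """Detailed, current, day-to-day listing (operational level)."""
    return [f"Week {r.week} {r.day}: {r.units_produced}/{r.units_target} units"
            for r in records]

def middle_management_report(records):
    """Summarised weekly totals for tactical planning."""
    weekly = defaultdict(int)
    for r in records:
        weekly[r.week] += r.units_produced
    return dict(sorted(weekly.items()))

def top_management_report(records):
    """Overall condition of the operation in highly summarised form."""
    produced = sum(r.units_produced for r in records)
    target = sum(r.units_target for r in records)
    return f"Overall output: {produced} of {target} targeted units ({produced / target:.0%})"

print(supervisor_report(records))         # detailed daily listings
print(middle_management_report(records))  # e.g. {1: 990, 2: 450}
print(top_management_report(records))     # single summary line

The point of the sketch is only that the data is the same throughout; what changes from level to level is the degree of summarization and the time horizon of the report.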


REINFORCING QUESTIONS
QUESTION ONE
(i) Define the following terms with regard to the study of systems.
(a)   System
(b)   System theory
(c)    Closed loop system                                                                      (6 marks)
(ii) Describe with the aid of a diagram the components of a closed loop system.                                                                                                                                                 (14 marks)
                                                                                                (Total: 20 marks)

QUESTION TWO
(i) Define the following terms:
a)      Entropy
b)      Factoring
c)      Equifinality
d)     Sub-optimality
e)      Synergy                                                                                         (10 Marks)
(ii) List the properties that are characteristic of all systems.                                   (10 marks)
                                                                                                            (Total: 20 marks)

QUESTION THREE
(i) Describe the three management levels in organizations and discuss their various informational needs and responsibilities. Give examples.                                 (12 Marks)

(ii) Organizational theory is the body of knowledge relating to the philosophical basis of the structure, functioning and performance of organizations. Such theory is derived from historical schools of thought stating the point of view of a number of early pioneers of management. Briefly describe four of these schools of thought.                      (8 Marks)
                                                                                                            (Total: 20 marks)

QUESTION FOUR
(i) Why is the study of systems theory and control systems useful?           (10 Marks)
(ii) A system may be coupled to varying degrees for various reasons. What is coupling in a system? What do you understand by the terms tightly coupled systems, loosely coupled systems and decoupling a system?                                                       (8 Marks)
(iii) Differentiate between an open system and a closed system.                (2 Marks)
                                                                                                (Total: 20 marks)

QUESTION FIVE
(i) What is a control system? Describe the basic elements of a control system.                                                                                                                                          (10 marks)
(ii) Describe the various components of a system.                                       (10 marks)
                                                                                                (Total: 20 marks)

CHECK YOUR ANSWERS WITH THOSE GIVEN IN LESSON 9 OF THE STUDY PACK



COMPREHENSIVE ASSIGNMENT NO.1
TO BE SUBMITTED AFTER LESSON 2

To be carried out under examination conditions and sent to the Distance Learning Administrator for marking by Strathmore University.
Time Allowed: 3 Hours                                           Attempt any FIVE questions

QUESTION ONE
Define the following terms:
(a)   A timesharing system                                                                  (5 marks)
(b)   A real time operating system                                                      (5 marks)
(c)    Sub-optimization, give an appropriate illustration.                  (5 marks)
(d)   Cybernetic systems                                                                      (5 marks)
(e)    Stochastic systems                                                                        (5 marks)
(Total: 20 marks)

QUESTION TWO
Computers rely on the operating system, one of the system software components, to perform their various functions. Briefly describe what an operating system is, its various functions and the services it offers to computer resource users, including applications and end users.                                                           (Total: 20 marks)

QUESTION THREE
(a) What are utility programs? Give five examples of utility programs.    (12 Marks)
(b) Giving appropriate examples differentiate between positive and negative feedback.                                                                                                                                    (4 marks)
(c) What is a deterministic system?                                                               (4 Marks)
                                                                                                             (Total: 20 marks)

QUESTION FOUR
Discuss the various characteristics of, and distinguish between, online systems and real-time systems. Give at least two appropriate examples of such systems.                                                                                             (Total: 20 marks)

QUESTION FIVE
Differentiate between batch and online processing. What are the advantages and disadvantages of each processing mode?                                           (Total: 20 marks)

QUESTION SIX
(a) Define the following terms:
(i)                 Magnetic Ink Character Recognition (MICR)
(ii)               Optical Mark Recognition (OMR)
(iii)             Optical Character Recognition (OCR)
(iv)             Computer Output to Microfilm (COM)                                (4 Marks)

(b) In order to understand computer and information systems, an understanding of general systems theory is necessary. Discuss the relevance of systems theory to information systems study.                                                                                   (10 Marks)

(c) Discuss the various desirable qualities of good information.                 (6 marks)
                                                                                                             (Total: 20 marks)

QUESTION SEVEN
(a) What factors would you consider when purchasing a software application for your department?                                                                                                              (6 Marks)

(b) What is cache memory and of what use is it in a computer system?  (4 Marks)

(c) What do the following acronyms stand for?
(i)     ASCII
(ii)    EBCDIC                                                                                  (4 Marks)
(d) List six major application areas of computers.                                       (6 Marks)
                                                                                                                 (Total: 20 marks)

QUESTION EIGHT
(a) What are the advantages and disadvantages of proprietary software and off-the-shelf software packages?                                                                                                (10 Marks)   

 (b)
(i) What is a Database Management System (DBMS)?                                (2 Marks)
(ii) What are the advantages of using a DBMS?                                          (8 Marks)
                                                                                                               (Total: 20 marks)



END OF COMPREHENSIVE ASSIGNMENT No.1
NOW SEND YOUR ANSWERS TO THE DISTANCE LEARNING CENTRE FOR MARKING
