cards-deck: 100-199_university::111-120_theoretic_cs::111_algorithms_datastructures

Run-time analysis:

anchored to [[111.00_anchor]]


Overview:

Whenever we want to evaluate the running time of an algorithm, we need a metric that captures this behavior and the time the algorithm consumes.

Simply measuring physical (wall-clock) time is not a feasible way to evaluate an algorithm, because it depends on the hardware; such measurements therefore cannot serve as the basis for our analysis.

Instead we introduce a simplified model of a computer that makes evaluating running times straightforward:

RAM-machines | Random Access Machine:

Traits of a RAM-machine:

  • they process sequentially, because modeling parallel processing would be too difficult as of now ( or is not possible [[112.04_non_deterministische_automaten]])
  • they feature uniform memory –> every access to memory takes the same amount of time
    • so we don’t have optimizations like caching, memory hierarchies etc –> it’s just one uniform memory space.

Uniform operations:

On this RAM machine each operation takes a specified amount of time. These operations and their costs are defined as follows:

  • load one bit from memory –> takes one unit of measurement, a “Rechen-Einheit”
  • write one bit to memory –> takes one unit
  • elementary arithmetic operations on numbers of bounded length: we assume that we can add, subtract, multiply and divide them in constant time
  • similarly, each logical operation (and, or, not) is an elementary operation (so it also costs one unit)

All those operations take one unit of time –> one Rechen-Einheit

Examples | calculating the number of operations:

Cost of adding two numbers that are represented with $N$ bits:

How many time units does the addition of two numbers represented with $N$ bits take? #card
Without a constant ( or better: actually defined ) size for a number (64 bit for example) it would take $\mathcal{O}(N)$ time-units ^1704708599285

To give an example: let's assume that $a$ and $b$ are two numbers of $N$ bits each and the actual computation is $a + b$: how many operations would we have now? #card
To conclude, we have $\mathcal{O}(N)$ operations to work with ^1704708599293
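To make this counting concrete, here is a minimal sketch (my own illustration, not lecture code; the function name and the counting style are assumptions) that adds two $N$-bit numbers bit by bit and counts one unit per bit position:

```python
def add_bits(a_bits, b_bits):
    """Add two N-bit numbers given as bit lists (least significant bit first).

    Returns the sum bits and the number of unit operations used,
    mirroring the O(N) cost argument above."""
    assert len(a_bits) == len(b_bits)
    ops = 0
    carry = 0
    result = []
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry          # one elementary addition per bit position
        result.append(s % 2)
        carry = s // 2
        ops += 1                   # one Rechen-Einheit per bit position
    result.append(carry)
    return result, ops

# Adding two 4-bit numbers (13 + 3 = 16) takes 4 units -> linear in N.
print(add_bits([1, 0, 1, 1], [1, 1, 0, 0]))
```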

Now to look at multiplication: let's assume that $a = a_1 a_0$ and $b = b_1 b_0$ are two numbers of $N = 2$ digits each and the actual task is $a \cdot b$: how many computations would this take? #card With each digit of one number we have to multiply every digit of the other. To visualize, that would be:

  • $a_0 \cdot b_0$ -> first multiplication
  • $a_0 \cdot b_1$ -> second
  • $a_1 \cdot b_0$ -> third
  • $a_1 \cdot b_1$ -> fourth

After that we have to add all the partial results together –> the size of this complete addition would then be $N \cdot N$ or $\mathcal{O}(N^2)$ ^1704708599298
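A hedged sketch of the schoolbook multiplication counted above (names and digit representation are my own choices): every digit of one number gets multiplied with every digit of the other, so two $N$-digit numbers already cause $N \cdot N$ elementary multiplications before the final additions:

```python
def schoolbook_multiply(a_digits, b_digits, base=10):
    """Multiply two numbers given as digit lists (least significant digit first)
    and count the elementary digit multiplications: N * N of them."""
    mults = 0
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            result[i + j] += a * b     # one elementary multiplication
            mults += 1
    # propagate carries; these additions also contribute O(N^2) units
    for k in range(len(result) - 1):
        result[k + 1] += result[k] // base
        result[k] %= base
    return result, mults

# Two 2-digit numbers: 23 * 15 = 345 with exactly 4 digit multiplications.
print(schoolbook_multiply([3, 2], [5, 1]))
```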

Further examples:

  1. add two standard integers (int) :: $\mathcal{O}(1)$, so it's a constant operation ^1704708599302

  2. write a matrix into memory :: $\mathcal{O}(n \cdot m)$, i.e. for a matrix with $m$ columns and $n$ rows ^1704708599306

  3. read a string of length $n$ :: $\mathcal{O}(n)$, because it depends on the size
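To make example 2 concrete, a small counting sketch (my own illustration; the names are assumptions): writing every entry of an $n \times m$ matrix costs one unit write per entry, so $n \cdot m$ units in total:

```python
def write_matrix(n, m):
    """Write an n x m matrix into 'memory' and count the unit writes."""
    memory = [[0] * m for _ in range(n)]
    writes = 0
    for i in range(n):
        for j in range(m):
            memory[i][j] = i * m + j   # one unit write per entry
            writes += 1
    return writes

print(write_matrix(3, 4))   # 12 writes -> O(n * m)
```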

Worst-case running times:

With worst-case running times we consider / describe the circumstances of executing an algorithm that can cause the longest running time. It is important to note that this is a theoretical bound.

[!Tip] Intuition for worst cases: Worst cases are crucial for safety-critical systems because they set an upper bound on the running time of any particular instance. Some algorithms behave very well in practice, so the constructed theory may point to really bad worst-case scenarios that are not as bad in everyday use because they are rather rare. Example: a sorting algorithm that is good in practice may still have a heavy worst-case scenario in theory. Although this worst case exists, it may not occur often, while the good / average cases are common.

If we consider all instances of a problem, then we want to determine for which of them the algorithm takes the longest to solve the problem.

For this we define the following:

  1. $I_n$ as the set of all instances of length $n$ –> so it depends on $n$
  2. We describe the running time on a given instance $x \in I_n$ with $T(x)$. With this we can describe the worst-case running time as follows: how? #card $T_{\text{worst}}(n) = \max_{x \in I_n} T(x)$ –> so the maximal running time out of the set of all running times (over $I_n$, obviously) forms the worst-case running time. ^1704708599312
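A minimal sketch of this definition (my own toy example, not from the lecture): if we can enumerate all instances in $I_n$ and count $T(x)$ for each, the worst case is simply the maximum. The cost function here, a linear search over 0/1 lists, is a hypothetical placeholder:

```python
from itertools import product

def linear_search_cost(instance):
    """Hypothetical T(x): comparisons a linear search for the value 1
    performs on the 0/1 list `instance`."""
    for steps, value in enumerate(instance, start=1):
        if value == 1:
            return steps
    return len(instance)

def worst_case(n):
    """T_worst(n) = max over all x in I_n of T(x)."""
    instances = product([0, 1], repeat=n)      # I_n: all 0/1 lists of length n
    return max(linear_search_cost(list(x)) for x in instances)

print(worst_case(4))   # 4 -> the all-zero instance forces a full scan
```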

Average-case running times:

When we look at the average-case running time, we want to capture: -> how long does the algorithm take on average over the instances of length $n$?

For this it is again necessary to define some basics: let $I_n$ denote the set of all instances of length $n$. Further, we denote the time taken on a given instance $x \in I_n$ with $T(x)$. We can now define the average-case running time as follows: #card
$T_{\text{avg}}(n) = \frac{1}{|I_n|} \sum_{x \in I_n} T(x)$

[!Tip] Intuition That means we take the set of instances, sum up the running times they produced, and afterwards divide by the number of instances that were executed. So it is a simple average calculation ^1704708599316

There are some subtleties regarding average-case times, because they depend on the number of instances considered. This may pose some issues, but we are not focusing on this aspect in this lecture.
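Continuing the same toy setup from the worst-case sketch above (again my own illustration), the average case just replaces the maximum by the mean over $I_n$:

```python
from itertools import product

def linear_search_cost(instance):
    """Hypothetical T(x): comparisons of a linear search for the value 1."""
    for steps, value in enumerate(instance, start=1):
        if value == 1:
            return steps
    return len(instance)

def average_case(n):
    """T_avg(n) = (1 / |I_n|) * sum over x in I_n of T(x)."""
    instances = [list(x) for x in product([0, 1], repeat=n)]
    return sum(linear_search_cost(x) for x in instances) / len(instances)

print(average_case(4))   # 1.875 -> most instances contain a 1 early on
```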

Special cases of running times:

There are various terms that are commonly used and denote a specific running time. Each of them thus describes a particular Landau notation [[111.04_laufzeitanalyse_onotation|Onotation]]

linear :: $\mathcal{O}(n)$ ^1704708599320

sublinear :: $o(n)$ ^1704708599324

superlinear :: $\omega(n)$ (strictly greater than linear) ^1704708599328

polynomial :: $\mathcal{O}(n^k)$ for some constant $k$ ( exactly the same growth as a polynomial in $n$ ) ^1704708599333

exponential :: $\mathcal{O}(2^{n})$ (or more generally $\mathcal{O}(c^{n})$ for a constant $c > 1$) ^1704708599336
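To make these classes tangible, a few illustrative representatives (my own examples, with $n$ the input size; the lecture may use slightly different boundaries):

$$\begin{align*}
\text{linear:} &\quad T(n) = 3n + 7 \\
\text{sublinear:} &\quad T(n) = \sqrt{n} \\
\text{superlinear:} &\quad T(n) = n \log n \\
\text{polynomial:} &\quad T(n) = 5n^3 + n \\
\text{exponential:} &\quad T(n) = 2^n
\end{align*}$$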

Space complexity:

[!Important] It is also necessary to know how much memory an algorithm is utilizing / requiring

For this lecture we will use a unit-cost model, meaning that we assign size one to each of the following units:

  • a letter
  • a logical bit
  • a number of fixed / bounded length

[!Info] Reasoning with space complexity: #card For many problems there is a certain space/time tradeoff available.

That means:

  1. Some algorithms are very fast, but need a lot of space
  2. Other algorithms are very slow, but need close to no space

As an example: one could calculate the Fibonacci numbers at runtime, or store them to retrieve them from a cache later ^1704708599340
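A small sketch of that Fibonacci trade-off (my own illustration): the plain recursion stores no intermediate results but takes exponential time, while memoization spends $\mathcal{O}(n)$ storage to get down to linear time:

```python
from functools import lru_cache

def fib_recursive(n):
    """No stored results (only the recursion stack), but exponential time."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    """Caches every intermediate result (O(n) space) for linear time."""
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_recursive(10), fib_cached(10))   # both 55, at very different cost
```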