Hi, I'm Evelyn, a CS student at the University of Tübingen in Germany.

I "built" this website to gather experience with git hooks and with mdBook and its features for turning Markdown files into HTML - and, well, to provide a space for:

  • my ideas
  • thoughts
  • projects
  • scripts and documentation

Goal and motivation:

Besides the reasons given above, I generally wanted to share thoughts, findings and experiences on a platform that is not dependent on another one - i.e. social media or Discord. I had therefore long thought about creating a webpage to give those things a foundation to rest on, yet my first webpage was rarely updated and I dropped the idea - primarily because I had no CMS running and it was entirely hardcoded, meaning I had to connect to the server whenever I wanted to change a piece of information or write a new entry. And considering that I also had to update internal links, references and more: that's a pain without any dedicated CMS running!

For some years - since 2022, I think - I've been logging, collecting and organizing my material, education and more in Obsidian, and I grew accustomed to writing everything down in Markdown. With that in mind, my ideal setup would be one where I write things in Obsidian, copy and link them into a repo, and push that repo somewhere that builds a website out of it. At first I wanted to write that tooling myself and saw it as a good chance to get more experience with Rust; I conceptualized a lot but never found the time to actually start writing it. While tutoring software engineering in 2023, we began publishing the script for our lecture via mdBook, and that made me wonder whether I could use this infrastructure to finally deploy a website with barely any effort at all.

I describe the whole concept and idea of this blog in another post, here.

What I'm about

I have a wide field of interests - a double-edged sword, tbh - which occupy most of my free time and beyond. Those include:

  • 3d printing
  • programming
  • soldering
  • keyboards
  • music
  • games
  • organizing things
  • algorithms
  • networking
  • science things
  • politics
  • data privacy - ironic, sharing all this information about myself here, huh?

and likely some more which I'm not listing or forgot about.

Some Quotes I like:

"We are perpetually trapped in a never-ending spiral of life and death."

Tormented by the fear of not being necessary

The intimate desire of acceptance

Overwhelmed by the fear of being alone

Documentation is a love letter that you write to your future self

In case you would like to contact me, I provide the following option(s):

-> mail:

Where you might find me too:

Computer Science

Everything within this anchor might be linked to my education and contain information that I gathered during my studies.

IP basics | IPv4/v6

anchored to [[143.00_anchor]]

#Study #Network

This denotes the first lesson of the Internet Praktikum, where we discussed the basics of networking and how systems communicate between networks.

TCP / IP Model

With the TCP/IP model we introduce a stack of 5 layers to describe different services / areas within networking.

Specifically, those layers define protocols and operations, and allow us to abstract over them - meaning that each upper layer depends only on the interface provided by the layers below.

-> They build on top of each other.

Specifically we denote 5 layers here:

  1. Physical Layer
  2. Data Link Layer
    • interface for communication between systems on a physical link --> MAC addresses, unique to each physical device, live here
  3. Network Layer
    • provides routing of packets between networks - routers and systems - --> the logic for getting a packet from a to b
  4. Transport Layer
    • provides either TCP or UDP to allow end-to-end transmission between two hosts
  5. Application Layer
    • contains application-specific information - produced by the program - that needs to be transported from system to system

With the structure defined above we can then take a look at the structure of a packet sent through the network:

![[Pasted image 20240226170811]]

Given the image, what can we say about the transport of a packet for some application? What are the benefits? #card

We encapsulate the uppermost packet step by step, adding the information each lower layer requires. That way the layers stay independent of each other, which makes it easier for us to build / adapt the communication.

The structure of such a payload simply gains additional information the further we go down the layers:

![[Pasted image 20240226171055]] We also call this a Protocol Data Unit - PDU
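The layering described above can be sketched in a few lines of Python - the header strings here are simplified placeholders, not real protocol formats:

```python
# Sketch: each layer wraps the payload handed down from the layer above
# by prepending its own header (placeholder strings, not real headers).

def encapsulate(app_data: bytes) -> bytes:
    tcp_segment = b"TCP_HDR|" + app_data    # L4: transport header
    ip_packet = b"IP_HDR|" + tcp_segment    # L3: network header
    eth_frame = b"ETH_HDR|" + ip_packet     # L2: link-layer header
    return eth_frame

frame = encapsulate(b"GET / HTTP/1.1")
print(frame)  # b'ETH_HDR|IP_HDR|TCP_HDR|GET / HTTP/1.1'
```

Unwrapping on the receiving side simply happens in reverse order, one header at a time.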

Addresses and routing

We need a way to address systems / computers within a network so they can communicate accordingly. We can define two different sets of addresses that will help us with networks!


[!Information] MAC addresses: which layer, what's their purpose? #card

MAC addresses are used to identify a network interface - NIC - and are mostly unique.

They consist of 48 bits, where the first 24 bits usually denote the Organizationally Unique Identifier --> who manufactured this interface

Important to note:

  • those MAC addresses are not bound to a region or similar - they carry no location information at all
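As a small illustration: the OUI is simply the first three bytes of a MAC address (the example address is made up):

```python
def oui(mac: str) -> str:
    """Return the first 24 bits (3 bytes) of a MAC address - the OUI."""
    parts = mac.lower().split(":")
    return ":".join(parts[:3])

print(oui("8C:1F:64:AB:CD:EF"))  # 8c:1f:64 - identifies the manufacturer
```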

--> This also introduces the issue with them, and why they don't suffice for global networking. What exactly is creating an issue here? #card

  • If we were to route packets globally we would require some identifier that denotes a location --> like saying "ah, this belongs to network X". But MAC addresses carry no such linking / correlation, so it's not possible.

Hence we require something on the layer above:

IP addresses | L3

Now, to communicate outside of our own network and to connect different networks together, we require IP - the Internet Protocol.

[!Definition] IP what are the traits of this protocol, what does it enable, how? #card

The IP protocol gives the following traits:

  • connectionless communication -> no setup involved in establishing a communication
  • packet switching -> splitting a large chunk of data into smaller pieces to send them. This may be necessary to transmit large files over a connection that cannot carry those large packets in one piece --> splitting them up is required!
  • it's not reliable
    • no error correction deployed
    • loops are possible during transmission
    • IP cannot fix issues of layer 2
  • ICMP to indicate issues
    • echo / ping
    • TTL exceeded
  • no flow control

We divide it into IPv4 and IPv6.

We may take a look at the datagram of an IPv4 packet: ![[Pasted image 20240226172854]] explain the given fields briefly #card

  • total length -> size of the packet
  • flags -> denote whether the packet may be fragmented or not
  • offset -> denotes where the received data fits into a fragmented packet ( like position 2 of 1,2,3,4 )
  • src addr / dest addr
  • options -> IP options, if defined

ICMP - Internet Control Message Protocol

[!Definition] What is ICMP used for? #card

In case of errors / failures during transmission, ICMP allows us to indicate / signal them for the following issues:

  • dest unreachable
  • echo request / reply -> ping
  • TTL exceeded
  • ... some additional ones

![[Pasted image 20240226173156]]

ICMP messages are sent in reaction to other traffic --> they are only issued in response to a packet, not on their own.

IPv4 Address Structure

With IPv4 we have 32 bits available. What are we grouping those into? #card We divide those into two groups: subnet prefix and host identifier ( well, technically also a network identifier, but network identifier + subnet identifier together equal the subnet prefix ).

[!Tip] Hosts in same subnet #card

Hosts in the same subnet can reach each other without any routing ( no extra router necessary ).

Communication between neighbours

Now, with the given layer-3 addresses, we would like to establish a way to communicate between hosts in a local network.

For that we have to somehow match the unique identifier of an interface - the MAC address - with the IP address defined in the local network.

Introducing ARP:

ARP - Address Resolution Protocol (v4)

ARP is rather simple in its structure: how does it work? #card

A given host queries, on the broadcast channel of the network, whom a given IP address belongs to. This packet is sent to all attached devices, and the one matching the IP will then respond to the device that issued the request. So we have the following structure:

  • HOST_X -> Who has IP X.X.X.X?
  • HOST_Y -> does not answer, not its IP
  • HOST_Z -> registers that it's being asked, sends back its MAC + IP via broadcast or direct unicast

Subnetting | Networks within Networks

For further insights about subnetting consult 143.06_subnetting

How are we defining a subnet? #card By using subnet masks ( also 32 bits long ) we can mark a given range - denoted by 1-bits - as part of the subnet identifier, whereas the rest is left for host identifiers to fill.

How do subnet-masks work?

Consider a subnet mask of /24 -

If we take a look at some IP address like and compare both addresses bitwise, we obtain 192.168.20 as the subnet identifier and 20 as the host id.

This allows us to split an IP address into different portions. For the example above that does not change much / is not really helpful, but as soon as we get subnet masks that "tower into an octet" of an address, it can get difficult to easily identify which subnet an IP address belongs to.

see also: notions_for_gdi

Classful Addressing

The information below is taken from here

--> Classful addressing is an IPv4 addressing architecture that divides addresses into five groups.

Prior to classful addressing, the first eight bits of an IP address defined the network a given host was a part of. This would have had the effect of limiting the internet to just 254 networks. Each of those networks contained 16,777,216 different IP addresses. As the internet grew, the inefficiency of allocating IP addresses this way became a problem. After all, there are a lot more than 254 organizations that need IP addresses, and a lot fewer networks that need 16.7 million IP addresses to themselves.

Simply put: we needed a way to more efficiently allocate addresses. In 1981, RFC791 and classful addressing came along to help solve that problem. With classful addresses, we went from just 254 available networks to 2,113,664 available networks. How?

How classful addressing works

Classful addressing divides the IPv4 address space into 5 classes: A, B, C, D, and E. However, only A, B, and C are used for network hosts. Class D, which covers the IP address range –, is reserved for multicasting, and class E ( – is reserved for “future use.”

The table below details the default network mask (subnet mask), IP address ranges, number of networks, and number of addresses per network of each address class.

| Class | Default mask | Number of IPv4 networks | IPv4 addresses per network | IPv4 address range |
| ----- | ------------ | ----------------------- | -------------------------- | ------------------ |
| A | | 128 | 16,777,214 | – |
| B | | 16,384 | 65,534 | – |
| C | | 2,097,152 | 254 | – |

As we can see, Class A continues to use the first 8 bits of an address and may be suitable for very large networks. Class B is for networks much smaller than Class A, but still large in their own right. Class C addresses are suitable for small networks.

What are the limitations of classful IP addressing?

While classful IP addressing was much more efficient than the older “first 8-bits” method of chopping up the IPv4 address space, it still wasn’t enough to keep up with growth.

As internet popularity continued to surge past 1981, it became clear that allocating blocks of 16,777,216, 65,536, or 256 addresses simply wasn’t sustainable. Addresses were being wasted in too-large blocks, and it was clear there’d be a tipping point where we ran out of IP address space altogether.

One of the best ways to understand why this was a problem is to consider an organization that needed a network just slightly bigger than a Class C. For example, suppose our example organization needs 500 IP addresses. Going up to a Class B network means wasting 65,034 addresses (65,534 usable Class B host addresses minus 500). Similarly, if it needed just 2 public IP addresses, a Class C would waste 252 (254 usable addresses – 2).

Any way you look at it, IP addresses under the IPv4 protocol were running out, either through waste or the upper limits of the system.

The Issue with Classful IP Addresses

(taken from here)

The main issue with classful IP addresses is that the scheme wasn't efficient, and could lead to a lot of wasted IP addresses.

For example, imagine that you're part of a large organization back then. Your company has 1,000 employees, meaning that it would fall into class B.

But if you look above, you'll see that a class B network can support up to 65,534 usable addresses. That's way more than your organization would likely need, even if each employee had multiple devices with a unique address.

And there was no way your organization could fall back to class C – there just wouldn't be enough usable IP addresses.

So while classful IP addresses were used around the time IPv4 addresses became widespread, it quickly became clear that a better system would be necessary to ensure we wouldn't use up all of the ~4.2 billion usable addresses.

Classful IP addresses haven't been used since they were replaced by CIDR in 1993, and are mostly studied to understand early internet architecture, and why subnetting is important.

To fix this issue of wasted addresses, a way to dynamically set subnets and subnet masks was constructed:

CIDR | Classless Inter-Domain Routing

--> We want to be able to vary the ranges / balance of IP addresses available and subnets possible according to the needs one may encounter - housing student members, as an example here at #Netzak:

[!Definition] CIDR - Classless Inter-Domain Routing what does it enable? #card

Here we can now set subnets of arbitrary length instead of the fixed classes defined ( by IANA )

Meaning that we can simply append the used subnet length at the end of an address range, like -> denoting a subnet mask with 20 1-bits from left to right

IPv6 | cus v4 is too small

Since the 1990s it was apparent that the number of addresses available with IPv4 was not enough, and thus an extension was needed. This is now provided by IPv6:

[!Definition] IPv6 what are its traits? #card

128 bits for declaring an IP address.

It's denoted as 8 blocks of 4 hexadecimal digits divided by ":"; multiple blocks of "0000" can be minimized to "::"

  • 0123:4567:89ab:cdef:0123:4567:89ab:cdef
  • abcd:0000:0000:0000:0000:0000:1234:5678 → abcd::1234:5678

There are no defined network classes -> but there are still subnets! The header is different too, and several protocols have v6 counterparts: ICMP -> ICMPv6, DHCP -> DHCPv6, ARP -> NDP.

For a good overview - made by RIPE - consider the following file: ripe_ipv6-address-types

The following resources were provided during a lab course at the University of Tübingen.

Relevant RFCs for IPv6 might be:

  • RFC3587 (2003): IPv6 Global Unicast Address Format (Obsoletes RFC2374) (Status: INFORMATIONAL)
  • RFC3769 (2004): Requirements for IPv6 Prefix Delegation (Status: INFORMATIONAL)
  • RFC4193 (2005): Unique Local IPv6 Unicast Addresses (Status: PROPOSED STANDARD)
  • RFC4291 (2005): IP Version 6 Addressing Architecture (Status: DRAFT STANDARD)

IPv6-Address shortening

[!Tip] Shortening IPv6 addresses by leaving out blocks of zeros:

RFC 3513 states that:

  • The use of :: indicates one or more groups of 16 bits of zeros.
  • The :: can only appear once in an address.
  • The :: can also be used to compress leading or trailing zeros in an address.
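Python's `ipaddress` module applies exactly these shortening rules, which makes it handy for checking them (using the example address from above):

```python
import ipaddress

# Compression: consecutive zero blocks collapse into a single "::".
addr = ipaddress.ip_address("abcd:0000:0000:0000:0000:0000:1234:5678")
print(addr.compressed)  # abcd::1234:5678

# And the reverse: expand a shortened address back to all 8 blocks.
print(ipaddress.ip_address("abcd::1234:5678").exploded)
# abcd:0000:0000:0000:0000:0000:1234:5678
```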

ARP alternative for IPv6 - NDP - Neighbor Discovery Protocol

At its core this protocol is similar to ARP, but comes with some additional features: it allows advertising to the router directly, checking reachability of neighbours - NUD - and more.

See also DAD or SLAAC, which use these mechanisms - afaik.

With this information we can basically construct a network of participants and have them communicate somehow. How this is done, and how it spreads further across networks, is covered here:

IP address configuration

Consider a set of devices we want to use to build a network.

There are two options - well, actually only one is feasible and good in the long term:

  • manual configuration
  • DHCP

Manual configuration

If we want to manually set IP addresses, we ought to define them somewhere.

[!Important] Important here:

there are no checks against duplicate assignments, meaning that by mistake we could set the same IP address on two devices, which will break communication with them --> this happened at #Netzak some time ago, where Reutlingen was shut down for some hours

Furthermore it's tedious - and requires knowledge about networks, so it's not feasible for most people - because the address has to be set for each device manually.

Hence we have a better autonomous solution:

Dynamic configuration

Instead of setting IP addresses for each device on our own, we could use DHCP - the Dynamic Host Configuration Protocol - and a corresponding server.

how does DHCP roughly work? #card

With DHCP we have a server that manages a given range of IP addresses which it dynamically assigns to requesting / participating devices. A new device sends a DHCP request via broadcast into the network, where the DHCP server is listening. Upon reception of a request, the server checks whether the device is known already - stored in its database - and either answers with the previously assigned IP address or sends an offer for a new address to take. The recipient listens for this response and sends an acknowledgement if it takes this IP address. That is confirmed once more, and now we have a new device in the network that can communicate with its assigned IP address.

This works for IPv4 (DHCP) and also IPv6 (DHCPv6). For IPv6, however, we have an alternative for automatically assigning addresses in a network:

-> SLAAC - Stateless Address Autoconfiguration. Here the client receives a network prefix (64 bits) via ICMPv6 - via Router Advertisement - and then chooses an interface identifier for the lower half (64 bits) itself. It tests against possible duplicates by using DAD.
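A sketch of the SLAAC idea - combining the 64-bit prefix from the router advertisement with a host-chosen 64-bit interface identifier (prefix and identifier here are made-up examples; real hosts derive the identifier differently, e.g. randomly or from the MAC):

```python
import ipaddress

# SLAAC sketch: 64-bit prefix (from the Router Advertisement)
# + 64-bit interface identifier (chosen by the host) = full address.
prefix = ipaddress.ip_network("2001:db8:1:2::/64")  # made-up RA prefix
interface_id = 0xAB                                 # made-up host-chosen IID

# The low 64 bits of the prefix are zero, so OR-ing in the IID fills them.
address = ipaddress.ip_address(int(prefix.network_address) | interface_id)
print(address)  # 2001:db8:1:2::ab
```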

More specific information about this operation can be found here: 143.18_dhcp

Routing Overview

anchored to 143.00_anchor

also covered in 143.03_dynamic_routing

What is meant with Routing ?

This section was taken from the - now archived - wiki from Cisco, linked here:

[!Definition] Routing Routing is the act of moving information across an internetwork from a source to a destination. Along the way, at least one intermediate node typically is encountered. Routing is often contrasted with bridging, which might seem to accomplish precisely the same thing to the casual observer. The primary difference between the two is that bridging occurs at Layer 2 (the link layer) of the OSI reference model, whereas routing occurs at Layer 3 (the network layer). This distinction provides routing and bridging with different information to use in the process of moving information from source to destination, so the two functions accomplish their tasks in different ways.

Routing involves two basic activities: determining optimal routing paths and transporting information groups (typically called packets) through an internetwork. In the context of the routing process, the latter of these is referred to as packet switching. Although packet switching is relatively straightforward, path determination can be very complex.

Path Determination

To route and forward packets within a network efficiently, we need means to evaluate which path to take - and which to define/consider best. This requires certain metrics to be defined in order to calculate the costs of different paths in the network.

[!Definition] Metrics (for graphs) #card Generally speaking, a metric is nothing but a standard of measurement based on different aspects, like path bandwidth, hop count or similar.

In the context of routing within a network we establish and populate certain routing tables. Why? #card We use routing tables to denote the best option for a given router to forward packets destined for a given subnet/address. Each entry denotes a path for a given subnet/host id, plus the interface and device to hand the packet to. Further, each entry is usually the calculated minimum over all possible routes --> shortest path problems, 111.22_Graphen_SSSP_dijkstra

Routing Metrics

Routing tables contain information used by switching software to select the best route. But how, specifically, are routing tables built? What is the specific nature of the information that they contain? How do routing algorithms determine that one route is preferable to others?

Routing algorithms have used many different metrics to determine the best route. Sophisticated routing algorithms can base route selection on multiple metrics, combining them in a single (hybrid) metric. All the following metrics have been used:

  • Path length
  • Reliability
  • Delay
  • Bandwidth
  • Load
  • Communication cost

Path length is the most common routing metric. Some routing protocols allow network administrators to assign arbitrary costs to each network link. In this case, path length is the sum of the costs associated with each link traversed. Other routing protocols define hop count, a metric that specifies the number of passes through internetworking products, such as routers, that a packet must take en route from a source to a destination.

Reliability, in the context of routing algorithms, refers to the dependability (usually described in terms of the bit-error rate) of each network link. Some network links might go down more often than others. After a network fails, certain network links might be repaired more easily or more quickly than other links. Any reliability factors can be taken into account in the assignment of the reliability ratings, which are arbitrary numeric values usually assigned to network links by network administrators.

Routing delay refers to the length of time required to move a packet from source to destination through the internetwork. Delay depends on many factors, including the bandwidth of intermediate network links, the port queues at each router along the way, network congestion on all intermediate network links, and the physical distance to be traveled. Because delay is a conglomeration of several important variables, it is a common and useful metric.

Bandwidth refers to the available traffic capacity of a link. All other things being equal, a 10-Mbps Ethernet link would be preferable to a 64-kbps leased line. Although bandwidth is a rating of the maximum attainable throughput on a link, routes through links with greater bandwidth do not necessarily provide better routes than routes through slower links. For example, if a faster link is busier, the actual time required to send a packet to the destination could be greater.

Load refers to the degree to which a network resource, such as a router, is busy. Load can be calculated in a variety of ways, including CPU utilization and packets processed per second. Monitoring these parameters on a continual basis can be resource-intensive itself.

Communication cost is another important metric, especially because some companies may not care about performance as much as they care about operating expenditures. Although line delay may be longer, they will send packets over their own lines rather than through the public lines that cost money for usage time.

Packet Switching

adapted from:

Consider a host that sends a packet to a given router - wanting it to process and forward the packet. This is usually done by addressing the packet to the router's MAC address while also adding the destination IP address ( so both L2 and L3! ).

The router observing this packet will check its routing table and either find a path for the given IP address to forward the packet along, or not.

If there's no route available, the packet is discarded by the router. If it's known how / where to send the packet, the router will change the physical address (L2) and send the packet in that direction. --> It then proceeds the same way until the destination is reached.

[!Tip] Important observation for packet switching: what can we observe regarding the addresses? #card During packet switching we can observe that the physical address (L2) changes throughout the process, while the IP address (L3) stays the same and points towards the destination host.

The preceding discussion describes switching between a source and a destination end system. The International Organization for Standardization (ISO) has developed a hierarchical terminology that is useful in describing this process. Using this terminology, network devices without the capability to forward packets between subnetworks are called end systems (ESs), whereas network devices with these capabilities are called intermediate systems (ISs). ISs are further divided into those that can communicate within routing domains (intradomain ISs) and those that communicate both within and between routing domains (interdomain ISs). A routing domain generally is considered a portion of an internetwork under common administrative authority that is regulated by a particular set of administrative guidelines. Routing domains are also called autonomous systems. With certain protocols, routing domains can be divided into routing areas, but intradomain routing protocols are still used for switching both within and between areas.


Consider two systems spread across the globe: they would like to exchange packets from a to b, which is going to take resources as the packets jump between different networks -> until they reach the destination.

At the edge of each network we will have a router that does the routing. How do they decide where to send a packet? #card Routers use routing tables, where they store information about addresses and how to reach them - or how to take the next step to get closer to them. Consider a router with 3 interfaces where each goes to a different network: 1:, 2:, 3:. If we sent a packet with destination it would hand this request over to interface 3, as this network seems reachable from there - likely the closest too -> which is denoted by dynamic routing! <-

[!Definition] Longest Prefix Matching: what's meant by that? #card
In case our requested address matches several interfaces / entries in the table, we take the entry with the longest match --> where the prefix matches over the most bits. This is important whenever we have large subnets with smaller ones inside --> we match against the most specific entry, because that's where the destination address will be found.

When a router or host performs a lookup in the routing table, it searches for the entry that has the longest match with the prefix of the destination IP address of the datagram. This is referred to as a longest prefix match. First the routing table is searched for a match on all 32 bits of the IP destination address. Since a match with a 32-bit prefix can occur only for a host route, host routes always take precedence over network routes. If there is no 32-bit prefix match, the routing table is searched for an entry that has a 31-bit prefix match. Then the routing table is searched for a 30-bit prefix match, and so on. If there is no match with a host route or a network route, then the default route is selected. Since the default route is searched last, routing tables often represent the default route as a destination address with a 0-bit prefix. If no match is found and there is no default route in the routing table, the datagram is discarded and an ICMP network unreachable error message is sent to the source IP address of the datagram.

Static Routing

-> statically setting how to reach something / where to send a packet with a given prefix. This might make sense for smaller networks, but is tedious to maintain and generally not adaptable to changing circumstances.

Static | Dynamic routing

anchored to [[143.00_anchor]]

proceeds from 143.02_routing_basics #Study #Network

Denotes the second topic of the Internet Praktikum, talking about static and dynamic routing between networks - looking at different possibilities like DVR and LSR


We would like to find out how to route within a network - or multiple ones - and take the fastest / best paths by calculating this dynamically!

Design Goals for Routing Algorithms

taken from here

Routing algorithms can be differentiated based on several key characteristics. Which 3 can we define? #card

  1. First, the particular goals of the algorithm designer affect the operation of the resulting routing protocol.
  2. Second, various types of routing algorithms exist, and each algorithm has a different impact on network and router resources.
  3. Finally, routing algorithms use a variety of metrics that affect calculation of optimal routes. The following sections analyze these routing algorithm attributes.

Design Goals

Routing algorithms often have one or more of the following design goals:

  • Optimality
  • Simplicity and low overhead
  • Robustness and stability
  • Rapid convergence
  • Flexibility

Optimality refers to the capability of the routing algorithm to select the best route, which depends on the metrics and metric weightings used to make the calculation. For example, one routing algorithm may use a number of hops and delays, but it may weigh delay more heavily in the calculation. Naturally, routing protocols must define their metric calculation algorithms strictly.

[!Tip] Routing algorithms also are designed to be as simple as possible. In other words, the routing algorithm must offer its functionality efficiently, with a minimum of software and utilization overhead. Efficiency is particularly important when the software implementing the routing algorithm must run on a computer with limited physical resources.

Routing algorithms must be robust: which means that they should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect implementations. Because routers are located at network junction points, they can cause considerable problems when they fail. The best routing algorithms are often those that have withstood the test of time and that have proven stable under a variety of network conditions.

In addition, routing algorithms must converge rapidly.

[!Definition] Whats meant with Convergence ? #card Convergence is the process of agreement, by all routers, on optimal routes. When a network event causes routes to either go down or become available, routers distribute routing update messages that permeate networks, stimulating recalculation of optimal routes and eventually causing all routers to agree on these routes. Routing algorithms that converge slowly can cause routing loops or network outages.

In the routing loop displayed below, a packet arrives at Router 1 at time t1. Router 1 already has been updated and thus knows that the optimal route to the destination calls for Router 2 to be the next stop. Router 1 therefore forwards the packet to Router 2, but because this router has not yet been updated, it believes that the optimal next hop is Router 1. Router 2 therefore forwards the packet back to Router 1, and the packet continues to bounce back and forth between the two routers until Router 2 receives its routing update or until the packet has been switched the maximum number of times allowed.

Dynamic Routing

What do we mean by dynamic routing? #card -> dynamically constructing routing tables and deciding where to send something. This allows adapting to changing environments -> e.g. a node becoming unreachable and constructing a more expensive yet reachable path to the destination; or updating the cost to a destination based on shared information. Requirements:

  • exchanging information between routers so that they can construct their routing tables
  • calculating distances with given metrics - shortest paths in graphs basically, see [[111.21_algo_graphs_ShortesPathProblems]]

Reasons for dynamic routing

As discussed previously here, we have two options to route things through a network. Static routing quickly becomes tedious, complicated, and error-prone with many participants in a network -> it also requires manual adjustments.

Hence we've introduced dynamic routing which gathers information from all participants and calculates the best options within the system.

We would like to find the fastest / shortest connections between two participants (hosts, or even networks): Considering the network as a graph, how would we calculate the shortest path from a to b? #card

  • We ought to gather all necessary information about the network - to form a solution afterwards
  • we ought to calculate the paths -> according to which metric?
  • deciding how to forward - per router

For gathering the required information we can think of two principles:

Distance-Vector-Routing ( DVR )

[!Definition] traits of DVR? #card With DVR we exchange information about the network on a local scale, meaning that only neighbours communicate at a time. They exchange their current routing tables --> and by doing so they may receive newer / better options to connect from a to c, which they include in their calculation of the shortest path --> which is then fed into the routing table --> and exchanged again on the next iteration, and so forth. What we can observe:

  • we are exchanging the topology of the network in waves from each router --> because we only share information with our local neighbours. This means that propagating changes may take a while -> especially if two routers are far apart - with many hops in between

DVR - because of its local-exchange property - is an asynchronous system.

Calculating Distance

Once we've gathered the information to form a topology with costs in our network, we ought to compute best-paths for them too.

We represent the graph as an adjacency matrix, where we evaluate the distance from a given point to another point via an intermediate point. In the end we construct a matrix that contains all this information, and we take the minimum for each connection.

--> Alternatively we could just calculate with [[211.08_near_optimal_alg_all_pairs_shortest_path.pdf]] floyd warshall or similar.
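The min-over-intermediates computation described above is essentially Floyd-Warshall. A minimal sketch, using a made-up 4-router topology as the adjacency matrix:

```python
# All-pairs shortest paths over an adjacency matrix (Floyd-Warshall).
# The 4-router topology below is a hypothetical example; INF = no direct link.

INF = float("inf")

dist = [
    [0,   2,   INF, 7],
    [2,   0,   3,   INF],
    [INF, 3,   0,   1],
    [7,   INF, 1,   0],
]

n = len(dist)
for k in range(n):          # allow node k as an intermediate hop
    for i in range(n):
        for j in range(n):
            # going via k may be cheaper than the best path known so far
            dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

print(dist[0][3])  # -> 6: the direct link (7) loses to the path 0-1-2-3 (2+3+1)
```

Each router in DVR effectively performs only its own row of this relaxation, using the distance vectors its neighbours advertise.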

Issue with DVR

Regarding the way it spreads information, what could be a problem for this approach? #card Also called the count-to-infinity problem: propagating changes takes a while and can thus lead to wrong forwarding / routing tables --> e.g. trying to forward over a link that is now unreachable or whose cost changed heavily.

[!Definition] Count to infinity problem Consider a graph of vertices and weighted edges, and some connection to node $b$ via $a$. Now suppose $b$ is disconnected from $a$. $a$ becomes aware that $b$ is unreachable and propagates this information. However, another node $c$ may not yet know about this change and advertises a path to $b$ - which itself runs via $a$ - back to $a$. $a$ is not aware of this loop and assumes that $b$ is reachable by traversing to $c$ (which in turn will try to reach it via $a$). These updated values keep being exchanged through the network, and the advertised distance increases towards $\infty$.
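A toy simulation of this behaviour (all names and costs are made-up assumptions: two routers A and B with a unit-cost link, where B just lost its direct link to destination D):

```python
# Count-to-infinity sketch: A previously reached D via B at cost 2.
# The B-D link fails, but A keeps advertising its stale route back to B
# (no split horizon), so the metric climbs without bound: 3, 4, 5, ...

INF = float("inf")

dist = {"A": 2, "B": INF}            # B just lost its direct link to D
cost = {("A", "B"): 1, ("B", "A"): 1}

for rnd in range(6):
    # asynchronous updates: the routers take turns advertising
    if rnd % 2 == 0:
        dist["B"] = cost[("B", "A")] + dist["A"]   # B learns A's stale route
    else:
        dist["A"] = cost[("A", "B")] + dist["B"]
    print(rnd, dist)
```

After six rounds the distances have climbed to A: 8, B: 7 - and without an upper bound on the metric they would keep climbing forever.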

Solutions for this - count to infinity problem might be:

[!Tip] Split Horizon what does it describe? #card With split horizon we attempt to mitigate the count-to-infinity problem by prohibiting a router from advertising a route back out of the interface from which it was learned. That is: if a host "A" sends data - per its routing table - to "B" over a given interface "I", it should not transmit routing-table information about how to reach "B" over interface "I" --> We want to prevent sending information on how to reach some place to the place itself.

This in combination with Route poisoning can help mitigate the count to infinity problem:

[!Information] Route Poisoning what are we describing with this paradigm? #card If a node "A" learns that a route to a given destination "B" is unreachable, it informs the network about this by advertising the distance to "B" as $\infty$ --> that way we can explicitly indicate an unreachable route

Gains from this approach:

  • Node A does not send its neighbour E any routes to destinations that it would itself route via E (split horizon)
  • Better: advertise an infinite distance value in the reverse direction of learned routes (split horizon with poison reverse)
  • This eliminates simple short loops, but not all of them.
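What a router's advertisement to a single neighbour could look like under these two rules can be sketched as follows; the route entries and neighbour names are made up for illustration:

```python
# Split horizon / poison reverse sketch: filter the routing table before
# advertising it to one particular neighbour.

INF = 16  # RIP-style "infinity"

# routing table: destination -> (metric, next hop); None = directly connected
table = {
    "net1": (1, None),
    "net2": (2, "B"),   # learned via neighbour B
    "net3": (3, "C"),
}

def advertise(table, neighbour, poison_reverse=True):
    """Build the routing update sent to `neighbour`."""
    update = {}
    for dest, (metric, next_hop) in table.items():
        if next_hop == neighbour:
            if poison_reverse:
                update[dest] = INF   # poison reverse: advertise as unreachable
            # plain split horizon: omit the route entirely
        else:
            update[dest] = metric
    return update

print(advertise(table, "B"))                        # {'net1': 1, 'net2': 16, 'net3': 3}
print(advertise(table, "B", poison_reverse=False))  # {'net1': 1, 'net3': 3}
```

Either way, neighbour B never receives a usable route back to net2 - the destination it itself provides - so the simple two-node loop cannot form.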

Example Protocols

Enhanced Interior Gateway Routing Protocol

  • replaced IGRP
  • builds upon DVR

Routing Information Protocol

  • an old but well-known DVR protocol
  • with a maximum of 15 hops it is limited in network size
  • the route with the fewest hops is taken as best route --> even if it is a worse connection, e.g. over a slow serial link

Path vector routing | Variant of DVR

[!Information] additional traits of path-vector-routing what are the changes proposed? #card

This denotes another variation of DVR which tries to remove the count-to-infinity problem by replacing distance vectors with vectors containing both the distance - after all we require the shortest path - and additionally the path information - what we traversed / will traverse upon taking this path. With this setup we can easily detect loops!

Link State Routing

Instead of DVR and its local approach for sharing information about a network, we supply information to the whole network upon change.

How is this accomplished, and what are its advantages? #card By flooding the known parameters - each machine and its link costs to its local neighbours - to all participants via broadcast messages, we can gather intel on the topology of the network rather fast. Once all those flooded packets have been received, we know the whole topology and can calculate the shortest paths --> this now amounts to Dijkstra or similar, as all information is known!
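Since every router ends up with the full topology, the local computation can be sketched as a plain Dijkstra run; the topology below is a made-up example:

```python
import heapq

# Link-state computation sketch: after flooding, each router knows the full
# graph and runs Dijkstra locally from itself. Hypothetical topology:
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"A": 2, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 2},
    "D": {"B": 4, "C": 2},
}

def dijkstra(graph, source):
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already improved
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 5}
```

Note that A reaches C at cost 3 via B, not over the direct cost-5 link - the kind of decision DVR would only converge to after several exchange rounds.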

Link-state algorithms (also known as shortest path first algorithms) flood routing information to all nodes in the internetwork. Each router, however, sends only the portion of the routing table that describes the state of its own links. In link-state algorithms, each router builds a picture of the entire network in its routing tables. Distance vector algorithms (also known as Bellman-Ford algorithms) call for each router to send all or some portion of its routing table, but only to its neighbors. In essence, link-state algorithms send small updates everywhere, while distance vector algorithms send larger updates only to neighboring routers. Distance vector algorithms know only about their neighbors.

Because they converge more quickly, link-state algorithms are somewhat less prone to routing loops than distance vector algorithms. On the other hand, link-state algorithms require more CPU power and memory than distance vector algorithms. Link-state algorithms, therefore, can be more expensive to implement and support. Link-state protocols are generally more scalable than distance vector protocols.

While routing packets through a network we could potentially encounter routing loops:

Routing Loops | and how to prevent them:

There's a good blog entry by Cisco 222.26_preventing_network_loops_cisco that explains and gives some insights on both L2 / L3 loops!

Yet another explanation with good examples - also showing the count to infinity problem and how to prevent it, might be found 143.08_routing_loops_in_networks

[!Tip] General good source for various network topics:

Routing within the internet

When talking about routing in the internet we differentiate between two categories:

  • within an AS - autonomous system, so held and operated by a single entity
  • between those AS - backbone-ish communication

Because we categorize into within AS and outside of AS we also have to define routers differently:

  • routers within an AS --> IG - interior gateway (connecting parts of the internal network together)
  • routers at the border of an AS -> EG - exterior gateway (connecting to the backbone and other AS)

Each of the two sections requires a different approach to communicating information, which we split into IGP and EGP:

IGP - interior Gateway Protocols:

are used within an AS or some internal network. Two specific routing protocols are described below:

  • RIP - Routing Information protocol
  • OSPF - Open shortest path first

RIP - Routing information protocol

[!Tip] Properties of RIP which are key traits / elements of RIP? #card

  • uses distance-vector routing to create shortest paths and the topology
  • we define a distance metric that also bounds the size of our network --> distances are measured as the maximum number of hops a packet can take (so it limits the size in a wave-like structure again; anything beyond that is not reachable and thus not contained in the routing structure). To spread routing knowledge across the domain, routers advertise their routing tables every 30 seconds
  • each advertisement carries at most 25 routes --> we can see that RIP is limited in network size!

Further, a rough view of how RIP operates: how do we denote an unreachable node? why use poison reverse? #card If no advertisement has been received from a given neighbour for 180 seconds, it is labeled unreachable/dead -> routes via that neighbour become invalid. Besides that, each router propagates an advertisement whenever its routing table changes, and updates its table upon received advertisements. --> we use poison reverse to avoid ping-pong loops

While this protocol uses DVR - a local approach - to establish a topology, we will also gain some insight into OSPF, which utilizes link-state routing for a global spread of the topology.

OSPF - Open shortest path first

[!Tip] traits of OSPF? which idea is it utilizing? Which protocol does it use? possible issue? #card

  • uses link-state routing to propagate the topology through the whole network (flooding)
  • each advertisement contains a single entry per neighbour (so information about all neighbours is flooded to all participants)
  • advertisements are sent directly over IP (protocol 89) and can be authenticated. Yet because we flood the whole network frequently, we create a lot of control traffic just to maintain the topology. With a large number of hosts this can degrade performance drastically, hence one might split a network into smaller portions to decrease the control traffic. Named hierarchical OSPF:
  • creates a backbone network which contains boundary routers (EG) and backbone routers that connect to areas formed within the network.
  • Instead of one large network we may have e.g. 4 areas that are connected to the backbone via area border routers - and run the typical OSPF within their area ![[Pasted image 20240226202402.png]]

EGP - Exterior Gateway protocols

Whenever we leave an AS / internal network, we may have to communicate with a backbone network or another AS. This communication may include the exchange of reachable IP prefixes that the other AS can use to route traffic accordingly. In case of a large AS this routing table can get huge and difficult to exchange, hence we have protocols dedicated to dealing with those large tables / exchanges. Namely, this includes EGP and BGP.

However, EGP is heavily outdated and thus not used anymore -> especially because it cannot handle the exchange of large routing tables well.

BGP - Border Gateway Protocol

As a replacement for EGP, the BGP protocol was introduced. It is the current standard for exchange between AS / EGs. (An internal version, iBGP, also exists, which is used to exchange routes between two border gateways of an AS - note the large tables sent from a to b here.)

[!Tip] Traits of BGP which concept is utilized? possible issues? #card BGP makes use of the path-vector protocol, a variation of DVR. Any given neighbour receives the whole routing table of this border router and can then decide whether to take a given path - or multiple ones - based on its internal policy / cost calculation or other considerations. A possible problem with BGP is its age and thus missing features that are necessary for operation nowadays: for one, it is possible to falsely advertise a route - in order to draw traffic to another AS and e.g. filter or exploit it.

  • Generally, security considerations are missing and it is prone to being misconfigured at times, too.
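The loop detection that path vectors enable can be sketched in a few lines; the AS numbers here are illustrative (drawn from the private-use range):

```python
# Path-vector loop check sketch: a BGP-style advertisement carries the full
# AS path, so an AS rejects any route whose path already contains its own
# number - that route would loop back through us.

MY_AS = 64512  # hypothetical AS number (private-use range)

def accept_route(as_path, my_as=MY_AS):
    """Accept an advertised route only if our AS is not already on its path."""
    return my_as not in as_path

print(accept_route([64496, 64500]))         # True  - loop-free path
print(accept_route([64496, 64512, 64500]))  # False - our AS is on the path
```

This is exactly the advantage over plain distance vectors: a loop is detected immediately from the path itself, instead of surfacing indirectly as a slowly climbing metric.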

end to end communication

ip forwarding


anchored to 143.00_anchor


Contains some information about the idea and application of subnetting, most of which was not given during the lab itself.

Contains content from the following sources:

for a short / quick overview consider 143.07_subnet_cheat_sheet

a good webapp for calculating subnets with different traits might be found here

Concept / Idea

The idea of subnets is relatively simple: we would like to create networks within networks to allow separation of hosts or similar. Further, this can be used to split a given block into smaller portions, like --> which we may've been supplied. Consider that we would like to split this network into 12 subnets - for whatever reason - meaning that we require 4 bits of subnet identifier to denote this split.

By using those additional 4 bits we also shrink the number of possible hosts per subnet from $2^{32-16}$ to $2^{32-16-4} = 2^{12}$ - yet we gain the ability to split them accordingly!
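This bit arithmetic can be sketched with Python's ipaddress module; the concrete /16 block used here (172.16.0.0/16) is a made-up example standing in for whatever block we were supplied:

```python
import ipaddress

# Borrowing 4 subnet bits from a /16: the /16 becomes sixteen /20 subnets,
# each with 2**(32-20) = 2**12 addresses.

block = ipaddress.ip_network("172.16.0.0/16")
subnets = list(block.subnets(prefixlen_diff=4))  # add 4 bits to the prefix

print(len(subnets))              # 16 subnets (2**4, enough for the 12 we need)
print(subnets[0])                # 172.16.0.0/20
print(subnets[0].num_addresses)  # 4096 = 2**12 addresses per subnet
```

The `prefixlen_diff=4` parameter directly mirrors the "4 bits of subnet identifier" from the text.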

To illustrate this idea further consider the following graphic:

Beside splitting the network into smaller ones, hosts can easily map out which devices they can reach / or not.

To signal a given subnet we can make use of CIDR - also denoted in 143.01_ip_subnets - which allows us to describe the used subnet mask at the end of the given range / host IP: --> Subnetmask =

To take the definition from freeCodeCamp:

[!Definition] Subnetmasks Subnet masks function as a sort of filter for an IP address. With a subnet mask, devices can look at an IP address, and figure out which parts are the network bits and which are the host bits.

|             | Decimal       | Binary                              |
|-------------|---------------|-------------------------------------|
| IP address  | 192.168.0.101 | 11000000.10101000.00000000.01100101 |
| Subnet mask | 255.255.255.0 | 11111111.11111111.11111111.00000000 |

With the two laid out like this, it's easy to separate into network bits and host bits. Whenever a bit in a binary subnet mask is 1, then the same bit in a binary IP address is part of the network, not the host.

Since the octet 255 is 11111111 in binary, that whole octet in the IP address is part of the network. So the first three octets, 192.168.0, is the network portion of the IP address, and 101 is the host portion.

In other words, if the device at wants to communicate with another device, using the subnet mask it knows that anything with the IP address is on the same local network.
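The mask-and-compare step described above can be sketched with the ipaddress module, using the example address from the table (192.168.0.101 with mask 255.255.255.0); the peer addresses are made up:

```python
import ipaddress

# Determine the network portion of an address by applying its subnet mask,
# then test whether other addresses fall on the same local network.

iface = ipaddress.ip_interface("192.168.0.101/255.255.255.0")
print(iface.network)  # 192.168.0.0/24 - the network bits after masking

# two hosts are on the same local network iff masking yields the same prefix
print(ipaddress.ip_address("192.168.0.42") in iface.network)  # True
print(ipaddress.ip_address("192.168.1.42") in iface.network)  # False
```

Internally this is just the bitwise AND of address and mask that the quoted explanation walks through octet by octet.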

Another great resource showing why subnetting is useful may be found here at Cloudflare

They also come up with a good explanation / analogy for reasons of subnetting:

[!Quote] Motivation for Subnets - by Cloudflare Imagine Alice puts a letter in the mail that is addressed to Bob, who lives in the town right next to hers. For the letter to reach Bob as quickly as possible, it should be delivered right from Alice's post office to the post office in Bob's town, and then to Bob. If the letter is first sent to a post office hundreds of miles away, Alice's letter could take a lot longer to reach Bob.

Like the postal service, networks are more efficient when messages travel as directly as possible. When a network receives data packets from another network, it will sort and route those packets by subnet so that the packets do not take an inefficient route to their destination.

Subnetting with IPv6

The whole definition of subnetting with Ipv6 can be found here: RFC 2373

Subnetting with IPv6 follows the same principles as with IPv4, but with some simplifications. Because IPv6 addresses are long - 128 bits! - a subnet mask is not really feasible to use. As an alternative, a prefix notation was introduced that indicates the size of a given subnet identifier (e.g. fd01::/64). Furthermore, broadcast addresses were dropped - they are replaced by IPv6 multicast - and the special notation of a network address was removed too (i.e. the representation as was removed, allowing us to use as a valid address - in IPv6, of course). With those changes we suddenly have more addresses available - although we have enough, see here. As an example: the subnet fd01::/64 allows us to use all addresses in the range fd01::0000:0000:0000:0000 - fd01::ffff:ffff:ffff:ffff for hosts!

Typically, non-aggregated subnets use a 64 bit subnet prefix. The rest of the address, bits 65 - 128, are used as an interface identifier. Interface identifiers are constructed according to the IEEE EUI-64 format. Most of the time, the interface identifier is either constructed from the MAC address of a network interface card, or it is generated randomly when SLAAC with privacy extensions is used.
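The EUI-64 construction mentioned above can be sketched in a few lines: split the 48-bit MAC in half, insert ff:fe in the middle, and flip the universal/local bit of the first octet. The MAC address used here is a made-up example:

```python
# EUI-64 interface identifier sketch: 48-bit MAC -> 64-bit interface ID.

def eui64_interface_id(mac: str) -> str:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02              # flip the universal/local bit of the first octet
    b[3:3] = b"\xff\xfe"      # insert ff:fe between the two MAC halves
    # group the 8 bytes into four 16-bit hex groups, IPv6 style
    groups = [f"{b[i] << 8 | b[i + 1]:04x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(eui64_interface_id("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
```

Because this leaks the hardware address into the IPv6 address, SLAAC privacy extensions generate a random interface identifier instead, as noted above.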

Routing Loops Explained with Examples

source: anchored to 143.00_anchor
proceeds from: 143.05_ip_forwarding
author: ComputerNetworkingNotes
estimated reading time: 9–12 minutes

This tutorial explains routing loops in detail through examples. Learn what the routing loops are and how they are formed in a distance-vector routing protocol running network.

Distance-vector routing protocols use broadcast messages to learn and advertise network paths.

A router running a distance-vector routing protocol periodically sends broadcast messages out from all of its active interfaces. These broadcast messages include the complete routing table of the router.

When other routers running the same distance-vector routing protocol receive these broadcast messages, they learn new routes from the advertised routing table and add them to their routing table.

Through this process, all routers running the same distance-vector routing protocol learn all routes of the network.

Like any other type of routing protocol, distance-vector routing protocols also have some problems. Routing loops are the most common problem of distance-vector routing protocols.

What is a routing loop?

A routing loop is a confusion about the reachability of a destination network. Routing loops not only consume a lot of precious network bandwidth but also cause the router to believe that an inaccessible network is accessible.

What causes a routing loop?

Distance-vector routing protocols use the routing update timer to propagate routing updates. If the value of this timer is not the same on all routers, routing loops may occur. In other words, routing loops may occur when all routers do not broadcast routing updates simultaneously.

When a loop occurs, a router (call it A) thinks that the path to some destination (call it C) is available through its neighboring router (call it B), while at the same time the neighboring router (B) thinks that the path to the same destination (C) is available through the first router (A). When a packet for destination C arrives, it will loop endlessly between routers A and B.

Let's understand this example in detail.

Routing loops example

The following figure illustrates a simple network. In this network, a destination network is directly connected to router C on its F0/0 interface. To ensure that the destination network always remains available, the administrator added an additional link between routers: A and B.

routing loop example

To enable IP routing, the administrator configured the RIP routing protocol. RIP is a distance-vector routing protocol and uses broadcast messages to learn and advertise network paths. RIP broadcasts routing updates every 30 seconds.

Now, suppose this network is powered off. To start this network, the administrator powered on all routers in the following order: C, A, and B. Since all routers are started at different times, their routing update timers are also running differently.

When the router C starts, it sends a broadcast message out from all of its active interfaces. This message indicates that the network is reachable through router C at the cost of one hop.

This tutorial is the fourth part of the article "How to configure RIP routing protocol explained with features and functions of the RIP protocol".

Both routers: A and B receive broadcast messages from router C on their interfaces: S0/0/0 and S0/0/0, respectively.

When a router receives a routing update, it learns the advertised routes and does the following.

  • If the advertised route is not available in the routing table, the router adds the advertised route to the routing table.
  • If the advertised route is available in the routing table, the router compares the metric of the advertised route with the metric of the route that is available in the routing table.
    • If the metric of the advertised route is worse, then the router ignores the advertised route and keeps the existing route.
    • If the metric of the advertised route is better, then the router replaces the existing route with the advertised route.
    • If the metric of the advertised route is equal, then the router adds the advertised route to the routing table along with the existing route. This feature is known as load balancing.
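The decision rules above can be sketched as follows (metric = hop count; destination prefix and neighbour names are made up for illustration):

```python
# Route-update processing sketch: install, replace, ignore, or load-balance,
# following the distance-vector rules described in the text.

def process_update(table, dest, advertised_metric, via):
    routes = table.setdefault(dest, [])  # dest -> list of (metric, next hop)
    if not routes:                        # no route yet: install it
        routes.append((advertised_metric, via))
        return
    current = routes[0][0]
    if advertised_metric < current:       # better: replace the existing route(s)
        table[dest] = [(advertised_metric, via)]
    elif advertised_metric == current:    # equal: keep both for load balancing
        if (advertised_metric, via) not in routes:
            routes.append((advertised_metric, via))
    # worse: ignore the advertisement

table = {}
process_update(table, "10.0.0.0/8", 2, "C")  # empty table: installed
process_update(table, "10.0.0.0/8", 3, "A")  # worse metric: ignored
process_update(table, "10.0.0.0/8", 2, "B")  # equal metric: load balancing
print(table)  # {'10.0.0.0/8': [(2, 'C'), (2, 'B')]}
```

The equal-metric branch is exactly what later lets router C hold two routes to the failed network and complete the loop in the example.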

From the received routing update, both routers: A and B learn that the destination network is available through router C at a cost of 1 hop. Since the routing tables of both routers are empty, they both add this routing information to their routing tables.

The following image shows this process.

before routing loops first routing update

After router C, router A broadcasts its routing update. This routing update indicates that the network is reachable through router A at the cost of 2 hops. Both router B and C receive this update. But they do not add the advertised route in their routing tables, as they already have a better route for the destination.

The following image shows this process.

second routing update before routing loop

In the end, router B broadcasts its routing update. This routing update indicates that the network is reachable through router B at the cost of 2 hops. Both routers A and C receive this message and ignore it, as they already have a better route for the destination network.

The following image shows this process.

third routing update before routing loop

At this moment all routers have learned all routes of the network. This state of the network is called convergence. Routers do not stop broadcasting routing updates after getting the state of convergence. As long as the network is running, all routers continuously broadcast their routing tables when their periodic timer expires. This feature helps routers to learn any network change that occurs in the future.

Physical loops vs. routing loops

Usually, physical loops do not cause much trouble for routing protocols. For example, our network works fine even though it includes a physical loop.

To eliminate any possibility of forming routing loops due to the physical loops of the network, distance-vector routing protocols add only one best route for each destination in the routing table.

But, this feature does not prevent routing loops that are caused by differences between routing update timers. Let's understand it.

Suppose, the connection between router C and the destination network's switch fails.

Since the destination network is directly connected to router C, router C immediately detects this change and removes the entry that is associated with the destination network from the routing table. However, router C does not pass this information to routers: A and B until its routing update timer expires.

The following image shows this process.

first routing update after routing loop

Now suppose that the routing update timer of router A expires before the routing update timer of router C expires.

Router A broadcasts its routing update. This routing update indicates that the network is reachable through router A at the cost of 2 hops. Both routers: B and C receive this message.

Router B ignores this update message as it still has a better route for the destination. But, router C not only processes this message but also adds the advertised route to its routing table as currently its routing table has no route for the destination network.

The following image shows this process.

second routing update after routing loop

When the routing update timer of router C expires, the router C broadcasts its routing update. This routing update indicates that the destination network is reachable through router C at the cost of 3 hops.

Both routers A and B receive this message and ignore it as they both have a better route for the destination.

The following image shows this process.

third routing update after routing loop

When the routing update timer of router B expires, the router B broadcasts its routing update. This routing update indicates that the network is reachable through router B at the cost of 2 hops.

Both routers: A and C receive this message. Router A ignores this message as it already has a better route for the destination network. Router C adds the advertised route to its routing table because the advertised route and the existing route both have equal cost. Routers add equal-cost routes for load balancing.

The following figure shows this process.

convergence after routing loop

At this moment, the network is converged again. But, this convergence is false. The destination network is down but routers A and B think that the router C knows how to reach the destination network while the router C thinks that the routers A and B equally know how to reach the destination network. This misunderstanding creates a routing loop.

When routers A and B receive a packet for the destination network, they will forward that packet to router C. And router C will forward that packet back to router A. The packet will keep cycling between routers: A and C endlessly.

The following image shows how a packet received by router B gets stuck in a routing loop.

packets stuck in routing loop

This is a very simple example of a routing loop. Typically, routing loops are created because of confusion in the network related to the drawbacks of using periodic timers.

That's all for this part. The next part of this article covers the methods that a distance-vector protocol might implement to solve routing loop problems.

DNS - Domain Name System

anchored to [[143.00_anchor]]

Motivation for DNS:

We use IP addresses for interacting with systems, but humans need human-readable names. For that we introduced DNS.

This helps us link an IP address to a given alias. Specifically, we can introduce different layers of aliases - TLDs etc. - to utilize multiple DNS servers and split their responsibilities into different scopes.

Former solution:

local host files that denote where to send something based on the given alias.

```
○ → cat /etc/hosts
# Standard host addresses
127.0.0.1  localhost
::1        localhost ip6-localhost ip6-loopback
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
# This host address
127.0.1.1  scattered14are05
```

Solution to manual host files | DNS

With DNS we deploy a name server that resolves domain names into ip-addresses.

endhosts will then use a DNS Resolver

[!Definition] Purpose of DNS Resolvers? #card

DNS resolvers are software modules that implement the DNS protocol and resolve DNS names to IP addresses - they perform the lookups against the DNS server structure and serve the conversion and similar.

Because resolving every possible domain would require a huge database - and further, huge query / answer times - we split domains into smaller portions which can then be handled and answered by separate DNS servers. -> this is called the Hierarchical Domain Name Space

Hierarchical Domain Name Space

If we take a look at some domain like we can see different parts separated by dots.

If we read the string representation backwards we can - usually - observe a pattern: a "common" descriptor followed by increasingly specific descriptors --> getting narrower the further we traverse.

With that we can create a tree categorized into several layers: which? what are root servers, as-dns-servers? #card

  • the topmost part of the tree is denoted by the root servers --> which contain the information on how / where to reach a given top-level domain --> they have well-known IP addresses | there are 13 root server addresses - each with many mirror servers
  • top-level domains are the first children of the root --> they describe all the different domain endings like .com, .org, .de, ...; originally many represented a country, yet many more variations have since been introduced.
    • for example, VeriSign maintains servers for the .com TLD
    • DENIC maintains servers for the .de TLD
  • after each top-level domain follows a domain name that may be managed by a single AS --> like google..... or similar
  • below this domain name, the AS holding that name server has full control over the subdomains it may create and publish.
    • Meaning may be used and shared by my dns server if I wanted to ![[Pasted image 20240227000448.png]]

Authoritative DNS Servers:

[!Information] what do we denote with an authoritative dns server? #card

With authoritative DNS servers we denote name servers that provide definitive information about a given zone --> they hold the zone's specific data in their database and can share it on request -> whom to contact for questions, covered domains, or similar.

-> they are usually maintained by the organization or service provider of the given zone

[!Definition] local caching name servers whats their purpose? where are they deployed #card

As the name suggests those are more locally placed name servers that can resolve requests to ips etc.

Importantly, they cache information about other name servers, or mappings directly, to speed up the process - avoiding re-requesting an often-used mapping, like a popular domain's IP, whenever a person requests this domain --> avoiding unnecessary traffic and cutting down on requests to larger name servers.

  • usually any ISP / university -> provider with a larger network contains one to quickly serve requests and speed up connections.

<- They should not be confused with proxies that may cache websites. -> They usually accept recursive queries and perform iterative DNS requests themselves.
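The caching idea can be sketched as follows; `full_lookup` here is a hypothetical stand-in for the real iterative resolution, and the TTL and addresses are made-up values:

```python
import time

# Caching-resolver sketch: answer from the cache while the TTL is valid,
# and only fall back to a full lookup on a miss or an expired entry.

cache = {}  # name -> (address, expiry timestamp)

def resolve(name, full_lookup, ttl=300):
    entry = cache.get(name)
    if entry and entry[1] > time.time():
        return entry[0]                       # cache hit: no upstream traffic
    addr = full_lookup(name)                  # cache miss: do the real work
    cache[name] = (addr, time.time() + ttl)
    return addr

calls = []
def fake_lookup(name):                        # stand-in for iterative resolution
    calls.append(name)
    return "192.0.2.1"                        # documentation address (RFC 5737)

resolve("example.org", fake_lookup)
resolve("example.org", fake_lookup)           # second request served from cache
print(len(calls))  # 1 - the upstream servers were asked only once
```

This is exactly the traffic reduction described above: repeated requests for a popular name never leave the local resolver while the cached entry is fresh.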

And with the given fragmentation into smaller hierarchies we can tackle further issues that would arise with a single DNS instance. Which ones? #card

  • if it went down everything would fail
  • interfaces from / to this service would be a bottleneck / a lot of traffic would pile up
  • maintenance could kill the whole web - in case of errors --> highly critical!
  • with larger distance requests take longer -> mirroring is not considered in this structure

In this section we may also include the term of zones defined as follows:

[!Definition] Zone

If we observe a given domain and its connected subtree, we know that everything below may be owned / managed by a given server - ( well, not for all; the TLDs for example may be rented out to specific corporations ) - so that system can keep the data for this subtree. -> It is important not to confuse a zone with a domain; consider multiple domains belonging to a single zone, for example.

DNS Request Types

Generally when making a DNS-request we ought to traverse multiple servers to gather our answer - after all we've established the idea of split name servers handling specific ranges / domains and such - and so there's two different approaches to query said information:

Recursive Queries

Denoting the following graphic we may gather intel on how this query works and what possible benefits / drawbacks could occur

name the drawbacks / advantages of recursive queries. Are they feasible? #card For one we put a certain burden on every name server asked, because each has to maintain state for the request - whether it has been answered and where to send the answer! In practice the endhost simply asks the DNS resolver to perform a query; beyond this step there are basically no real recursive queries:

  • root servers disallow those requests - load would be too high
  • TLD servers disallow those too - same reasoning
  • most authoritative name servers don't accept them either -> same as before. So in reality recursive queries are rarely used, because the state management combined with many requests would create high loads and responsibilities that those servers are not willing to take.

Considering that a recursive query is likely the easiest for the client - they only have to ask, and all machines on the way will do their thing to respond with a correct mapping - but the worst for all other participants, we can consider the more real-world approach of iterative requests.

Iterative DNS-Queries

With the given figure we would like to establish the idea and structure of an iterative query: name the advantages, for both servers and endhosts requesting a mapping #card As with recursive queries we have the endhost asking some local DNS server, which receives the request and is now asked to return a valid mapping. However, this server asks sequentially along the hierarchy about the address resolution:

  • first asking where TLD names are resolved - where is .de?
  • then gathering the address of a server to ask about the domain, further about subdomains and so on.

Each query is sent from the local DNS server to the corresponding DNS server, which answers either with the next server to ask or with a final mapping. --> no state management is left on the queried servers, because the local DNS server simply asks about mappings at each instance and returns the value once resolved.

Deciding whether to use recursive / iterative Requests

Above it was observable that both types of request have their advantages/disadvantages. Specifically, recursive requests may be favored by clients because they only request and get an answer without much effort on their side - they are only requesting and retrieving - while for all name servers involved they mean a higher load - due to state management and such. Furthermore we've observed that root and TLD servers usually deny recursive requests for the mentioned reason of load.

[!Tip] How would we allow/disallow a query to be recursive/iterative? #card

as described and defined in RFC 1034 The use of recursive mode is limited to cases where both the client and the name server agree to its use. The agreement is negotiated through the use of two bits in query and response messages:

  • The recursion available, or RA bit, is set or cleared by a name server in all responses. The bit is true if the name server is willing to provide recursive service for the client, regardless of whether the client requested recursive service. -> That is, RA signals availability rather than use.

  • Queries contain a bit called recursion desired or RD.
    This bit specifies whether the requester wants recursive service for this query. Clients may request recursive service from any name server, though they should depend upon receiving it only from servers which have previously sent an RA, or servers which have agreed to provide service through private agreement or some other means outside of the DNS protocol.
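The RD and RA bits can be checked with simple masks on the 16-bit flags field of a DNS header; a minimal sketch, with the constants following RFC 1035's bit layout:

```python
# Bit positions within the 16-bit DNS flags field (RFC 1035 section 4.1.1).
RD = 0x0100  # "recursion desired"   - set by the client in its query
RA = 0x0080  # "recursion available" - set by the server in its response

def wants_recursion(flags):
    """Did the requester ask for recursive service?"""
    return bool(flags & RD)

def offers_recursion(flags):
    """Is the server willing to provide recursive service?"""
    return bool(flags & RA)

# A reply with both RD and RA set means recursion was actually used:
reply_flags = 0x8180  # QR=1 (response), RD=1, RA=1
print(wants_recursion(reply_flags), offers_recursion(reply_flags))  # -> True True
```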

How (when) Recursive requests are executed

Taken from the official RFC 1034 we can denote and define the circumstance required to allow / execute a recursive request:

The recursive mode occurs when a query with RD set arrives at a server which is willing to provide recursive service; the client can verify that recursive mode was used by checking that both RA and RD are set in the reply. Note that the name server should never perform recursive service unless asked via RD, since this interferes with trouble shooting of name servers and their databases. If recursive service is requested and available, the recursive response to a query will be one of the following:

  • The answer to the query, possibly prefaced by one or more CNAME RRs that specify aliases encountered on the way to an answer.
  • A name error indicating that the name does not exist. This may include CNAME RRs that indicate that the original query name was an alias for a name which does not exist.
  • A temporary error indication.

If recursive service is not requested or is not available, the non- recursive response will be one of the following:

  • An authoritative name error indicating that the name does not exist.
  • A temporary error indication.
  • Some combination of: RRs that answer the question, together with an indication whether the data comes from a zone or is cached. A referral to name servers which have zones which are closer ancestors to the name than the server sending the reply.
  • RRs that the name server thinks will prove useful to the requester.

Packet structure of DNS

We may define the following properties for DNS-Packets below:

[!Definition] DNS packets what does it use, structure of a dns header / packet? #card

DNS is queried over UDP ( port 53 ) because we don't require any connection setup --> requests are sent faster

Any DNS header contains a Query-ID, counts of the fields - queries and answers - and some additional control information. Any query contains a Record-Type and the DNS names to resolve.
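The header layout can be illustrated by packing and unpacking its six 16-bit fields with Python's struct module. This is a sketch following RFC 1035; the query ID used here is arbitrary:

```python
import struct

def build_dns_header(query_id, flags=0x0100, qdcount=1):
    """Pack the six big-endian 16-bit header fields:
    ID, flags, question/answer/authority/additional counts."""
    return struct.pack("!6H", query_id, flags, qdcount, 0, 0, 0)

def parse_dns_header(data):
    qid, flags, qd, an, ns, ar = struct.unpack("!6H", data[:12])
    return {"id": qid, "flags": flags, "questions": qd, "answers": an}

header = build_dns_header(0x1234)   # RD flag set, one question
print(parse_dns_header(header))
```

The question and record sections with the actual DNS names follow after these fixed 12 bytes.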

Revisiting RR - Resource Records

DNS servers can contain different resource records for a given domain / mapping that indicate what it points to / describes. Those may include: which common RR do we use (omitting DNSSEC)? #card

  • A -> IPv4
  • AAAA -> IPv6
  • NS -> Name server
  • MX -> Mail Exchanger - incoming mail server for the given domain
  • CNAME -> alias name - maybe for a given ip or such
  • PTR -> Pointer which binds a name to an IP-address, for example
  • TXT -> text contained
  • SRV -> name and port number of a server that is responsible for a given service. Further extensions for security are denoted in netsec_LayerSec_Transport

Reverse Lookups

With DNS we have a mapping of domain-name -> IP-address but what if we are given a public ip-address and would like to map it to some domain?

With Reverse-Lookups we can achieve this reverse mapping by utilizing an additional extension / tree of DNS which, what does it require to successfully map a request? #card

[!Definition] in-addr.arpa

DNS provides a subtree that allows mapping an ip-address back to a name. Here the root is denoted by arpa, followed by in-addr.

Next, each entry contains one block ( octet ) of an ip-address, like "200" or "100" or similar.

By looking at the requested ip-address in reverse, octet by octet, we can then traverse through this tree until we reach a given leaf.

This leaf will contain the linked domain. DNS queries of this structure utilize the PTR record and contain the ip-address in reverse. This works for both IPv4/IPv6, obviously.

With the given structure we could query a given address - for IPv4: the reversed address under in-addr.arpa - and receive the linked domain as an answer.

Difference between Zone / Domain

one possible explanation from Stackoverflow: link

Domain name servers store information about part of the domain name space called a zone. The name server is authoritative for a particular zone. A single name server can be authoritative for many zones.

Understanding the difference between a zone and a domain is sometimes confusing. A zone is simply a portion of a domain. For example, the Domain may contain all of the data for, and However, the zone contains only information for and references to the authoritative name servers for the subdomains.

The zone can contain the data for subdomains of if they have not been delegated to another server. For example, may manage its own delegated zone. may be managed by the parent,

If there are no subdomains, then the zone and domain are essentially the same. In this case the zone contains all data for the domain.

I try to clarify the difference between zones and domains a little more for myself below:

If we declare a domain, we could also declare the whole space it spans - like a TLD or similar - as its authoritative zone.

Well, that's not quite right, because within this subtree - the domain space - it could still be possible that another subdomain spans its own zone. Zones hereby define areas of a domain where an entity - a corporation, a subsidiary of one, or whatsoever - holds the administrative right to maintain / change / modify it. The given example above - showing the difference between Marketing.Microsoft / Development.Microsoft - helps to show / signal that a single zone is not entirely equal to some domain. A domain might be split into smaller portions that have their own administration within the spanned tree. Like everything with / ... etc. belongs to the authority of the parent domain and is administrated by it too.

Why DNS uses UDP instead of TCP

When first designed and established, DNS was used to form a single response between two hosts - querying RR from their name server - and thus the answers provided could fit into the 512-byte packets possible with UDP. --> However if the answer ( with many RR in the response, for example ) is relatively large, DNS will use TCP too, to reliably send this content!

Furthermore DNS is somewhat time-sensitive, hence a fast response is favored over a reliable response --> we don't have to establish / maintain / close a connection ( like with TCP ) but can simply send our request and maybe resend it in case of failure. Using TCP would be a substantial overhead because of its setup ( compared to the actual data sent ). Besides these aspects it's also easier to embed anycast by using UDP - not having to ensure that our request ( reliable transfer with TCP ) is always sent to the same server.

-> We also don't require features like flow/congestion control, considering that we are sending tiny packets anyway.

[!Tip] Big issue with UDP Now while we have the mentioned benefits, UDP comes with the problem of not really being secure at its core. Spoofing / sniffing --> one can easily observe a plain DNS request without any modification. There are ways to avoid this netsec_LayerSec_Application like using DNSSEC, DoT, DoH or similar ( although those then just run over TCP too )

Further Resources :

  • online book helping to learn / understand / work with DNS: source to zytrax
  • General concepts and terminologies for DNS-servers digitalOcean

Mail Services

anchored to [[143.00_anchor]]

denotes the fourth lab-day with information / practical examples for working / managing mail-servers.


Establishing communication over several networks can be accomplished by using different systems / protocols like Matrix, XMPP and further, yet one of the most fundamental and important communication structures is e-mail.

Communication via e-mails requires the use of DNS systems because we ought to be able to denote/point toward a server to handle the mails for the given domain. This is done by MX-Records that point toward the server being responsible for handling mails in said domain.

( denote 143.09_DNS for further information)

To understand their structure / process of exchanging / communicating we ought to grasp the hierarchy / actors involved in emails.

Considering a simple communication between two clients, we traverse several stages / servers. which actors do we have with emails? #card As seen above we have two actors per side:

  • MUA - Mail user agents, which are the interface used by the client to write, edit, read mails
    • they query changes / new messages from the mail-server via IMAP / POP3 -> stateful/stateless
    • also pushing messages written to the Mail-server which will process / enqueue and send them
  • MTA - Mail transfer agent, that acts as instance to store/send/forward messages both incoming / outgoing
    • they store messages per user -> which users can then query with a MUA
    • they queue messages going outwards -> in case delivery failures occur
    • with DNS they are querying on how / where to send messages to

Further we introduce some terminology like:

MDA - Mail Delivery Agents whats their purpose? #card

  • those are used as mail filters
  • examples include procmail, sieve, maildrop

MX - Mail Exchanger whats their purpose? #card

  • they accept messages for local recipients
  • hence we have MX-RR in DNS!

For some additional information on mail transfer agents see this entry.

Open Relay Servers: whats their objective #card

  • they act as relays that accept messages for non-local recipients from unauthenticated senders --> further forwarding these messages to a given destination
  • because they accept and forward from unauthenticated senders they acted as a great transport mechanism for spam

As an evolution of open relays we may observe Smart Hosts. whats their purpose? #card

  • those also denote relays that accept and forward messages from certain clients --> however they require authentication or filter based on specific address ranges --> preventing spam from random actors.

Interworking of MX ( DNS ) and Mail

As observed in 143.09_DNS we have a RR denoting the address of a given mail server for a domain. With this knowledge we can describe the exchange of mails across networks with the usage of domain identifiers instead of ip-addresses: how would we send a given mail to some service? #card The following figure may show the possible path taken to send a message:

Structure of Mails

It's obvious, yet important, to denote and mention the structure of an email address:


we split into two parts:

  • a local part ( everything before "@")
  • a domain part ( everything after the "@")

[!Definition] parts of internet e-mails we have three, which? #card There are three parts contained in every e-mail. Envelope ( defined by SMTP and solely used by it to process mails ) -> contains the recipient's and sender's addresses -> only used during mail transfer

Header ( actual data belonging to the mail itself ) -> contains several pieces of information for the mail ( metadata, recipient and such )

Body ( contains the actual information that is sent ) -> with MIME we can further add data of all types, as well as encryption, as denoted here: Securing e-mails

Further information may be found here

The header is subdivided into fields containing key-value pairs.

There are many different ones listed here however below are the most important ones:

  • The date representing the point of time when the e-mail was sent, in the “Date:"-field
  • The sender who claims to be the author of the e-mail, in the “From:"-field
  • The destination to which the e-mail is sent, in the “To:"-field
  • The subject addressed in the message, in the “Subject:"-field

A possible e-mail header could be:

Date: Wed, 16 Jul 2014 10:43:42 +0200 (CEST)
From: Bob Foo
To: Alice Bar
Subject: Measures against Spam

We ought to define and observe the structure of the mail-format. As per RFC 5322 we have the following requirements: what are requirements for the mail header? #card

  • Header fields are key:value pairs
  • a key starts at the beginning of a line and ends with a colon and space
  • the rest of the line - after the key - is defined as the linked value
  • lines starting with white space are continuations of the previous header field --> in case we would like to transfer multi-line information
  • headers may include the following:
    • FROM:
    • TO:
    • SUBJECT:
    • DATE:
    • Cc:
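These rules ( key: value pairs, colon and space, whitespace continuation lines ) are exactly what Python's stdlib email.parser implements; a small sketch with a made-up message, demonstrating that a folded header is unfolded back into one value:

```python
from email import policy
from email.parser import Parser

# A made-up RFC 5322 message: note the folded "Subject:" header,
# continued on the next line by starting it with whitespace.
raw = (
    "Date: Wed, 16 Jul 2014 10:43:42 +0200 (CEST)\n"
    "From: Bob Foo\n"
    "To: Alice Bar\n"
    "Subject: Measures against\n"
    " Spam\n"
    "\n"
    "Body starts after the first empty line.\n"
)

msg = Parser(policy=policy.default).parsestr(raw)
print(msg["Subject"])   # continuation line joined back into the header value
```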

And further the body of an email requires a structure too: what is it encoded in, extending for images and more? #card

  • the messages are encoded in ASCII --> an American standard was adopted early on, thus ASCII is used rather than something better like UTF-8 or such
  • MIME - Multipurpose Internet Mail Extensions allows for different encoding, images or other formats to be appended
    • also supports encryption
  • Mails are not encrypted by default!

SMTP - Simple Mail Transfer Protocol

To send messages across MTAs we require some sort of standard denoting packet structures. This is defined by SMTP in RFC 5321. what protocol does it utilize? phases of sending messages? commands? #card

  • It's based on TCP -> provides error correction and connections between host <-> recipient (port 25)
  • requires 3 phases of dialogue
    • handshake - opening connection
    • transfer of message
    • closing connection
  • During connection any command is sent as ASCII
  • with every request comes a response with statuscodes and additional information

Mail envelopes

There may exist a difference between the addressed destination and the actual destination of a given mail. We ought to accomplish this idea with SMTP as well. how? what are examples denoting these two different values? #card In the context of SMTP we also call addresses "envelopes". Usually the envelope sender matches the "From:" header and the envelope receiver matches the "To:" header attached to a mail. However those can also differ:

  • mail redirections or simple mailing lists have a "To:" header that will not change, while the envelope receiver changes with every message
  • blind carbon copies - Bcc - also don't include the envelope-recipient in the "To:"-header.

Dialog of SMTP

Consider that we would like to exchange messages between two mail servers: we ought to establish a certain standard to make communication universal for all potential servers. For that we can observe specific dialogue options: we describe 6 different ones, what's their purpose / what are they named? #card

  • HELO / ( EHLO for ESMTP)
    • greets client, takes FQDN of the client as argument
  • MAIL FROM: -> denotes envelope sender
  • RCPT TO: -> envelope recipient
  • DATA -> denotes the message as payload
  • "." -> denotes end of message payload
  • QUIT -> terminates SMTP session

Besides these messages we always get a response - as mentioned before - which carries a status code. define the 3 categories of them #card They are pretty similar to HTTP status codes! 2xx: denotes success. 4xx: notifies about temporary errors -> quota exceeded, mail in queue or similar. 5xx: notifies about permanent errors --> denial by policy, unknown users or similar.
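The three reply categories can be expressed as a tiny helper; a sketch, not part of any SMTP library:

```python
def classify_smtp_reply(code):
    """Map an SMTP reply code onto the three categories used above."""
    if 200 <= code < 300:
        return "success"
    if 400 <= code < 500:
        return "temporary error"   # e.g. quota exceeded, mail still queued
    if 500 <= code < 600:
        return "permanent error"   # e.g. denied by policy, unknown user
    return "other"                 # e.g. 354 "start mail input"

print(classify_smtp_reply(250), classify_smtp_reply(452), classify_smtp_reply(550))
```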

example interaction might look like the following:

S: 220
C: HELO <...>
S: 250 Hello, pleased to meet you
C: MAIL FROM:<...>
S: 250 Sender ok
C: RCPT TO: <...>
S: 250 ... Recipient ok
C: DATA
S: 354 Enter mail, end with "." on a line by itself
C: From: Alice <...>
C: Subject: Fast Food
C: Do you like ketchup?
C: . (a single dot indicates the end of the message)
S: 250 Message accepted for delivery
C: QUIT
S: 221 closing connection

Extensions of SMTP

Security Extensions

Authentication with SASL: as defined in RFC 4954 we can add means to authenticate users by utilizing / adding SASL - Simple Authentication and Security Layer - to the whole mail process.

source for explanation of sasl ( iana )

[!Quote] Description of SASL by iana

The Simple Authentication and Security Layer (SASL) [RFC4422] is a method for adding authentication support to connection-based protocols.

To use this specification, a protocol includes a command for identifying and authenticating a user to a server and for optionally negotiating a security layer for subsequent protocol interactions.
The command has a required argument identifying a SASL mechanism.

MTA-STS | Strict Transport Security

With MTA-STS we can easily signal whether a given provider is able to use TLS for SMTP or not, by adding a separate entry into the DNS records. This entry can then be queried upon connection and thus enables / disables TLS - and defines what to do if TLS is not possible with the host.

source for this extensions found here: RFC8461

DSN - Delivery Status Notifications

We ought to signal the status of a request / process somehow, to enable us to act accordingly, resolve potential problems, or be notified about them at all. For that we use DSN and introduce the idea of a Bounce. whats meant with them? What are they signaling? #card Bounces may also be known as mailer-daemon messages. They provide information like: non-delivery notifications, delayed delivery notifications. -> Their destination is denoted by the envelope sender - for example if a message accepted by an MTA cannot be delivered after being queued.

[!Tip] Interaction with DSN service Because a DSN is a single-sided status update, they don't accept / expect a return --> leaving the return path empty, so "MAIL FROM:<>" is left empty!

Now due to this construction we may encounter late bounces as a potential problem: whats meant / defined by late bounce? #card Consider that an MTA notices that a message cannot be delivered after it accepted it - i.e. after enqueuing it to send. -> This might be the case if redirections or user-defined filters are used. Now there is no guarantee that the return path is correct -> the delivery status notification may be sent to the wrong sender --> as the information in the envelope sender is not specific enough / wrong. This is called BACKSCATTER.

[!Tip] Filtering before submitting to resolve late bounces we ought to run the filters during the SMTP dialogue, to prevent the message from being accepted in the first place - instead of having the error occur afterwards!

Spam Prevention:

SPF - Sender Policy Framework

taken from here follows a short introduction / concept of SPF: Briefly, the design intent of the SPF resource record (RR) is to allow a receiving MTA (Message Transfer Agent) to interrogate the Name Server (DNS) of the domain which appears in the email (the sender) and determine if the originating IP of the mail (the source) is authorized to send mail for the sender's domain. The mail sender is required to publish an SPF TXT RR (documented here) in the DNS zone file for their domain but this is transparent to the sending MTA. That is, the sending MTA does not use the sending domain's SPF RR(s) but the receiving domain's MTA will interrogate and use the sending domain's SPF RR(s).

The SPF information must be defined using a standard TXT resource record (RR).

SPF - Sender Policy Framework what is it built upon? #card The idea is to set up / use a reverse MX record.

  • within those we are storing the sender-policies to either reject / accept certain ranges or similar.
  • those policies are stored in a TXT RR. Possible examples may include: IN TXT "v=spf1 ~all" or IN TXT "v=spf1 a mx -all"

What can we gain from this approach? #card With these policies being stored as DNS-RR we can prevent unauthorized servers from abusing a domain as sender address

  • reducing spam load for all services that accept the policy
  • reducing the amount of backscatter load for those who are publishing the policy --> DSN that are directed to the domain and do not qualify to pass the policy are dropped!
  • However this could also prevent legitimate mail forwarding/redirections where the servers are unaware of SPF - they don't pass the check and are rejected or such


Another approach to reduce the amount of spam for mail servers is "Greylisting". idea of this approach, required information to store? #card Usually spam-sending services utilize "fire-and-forget" ( shotgun ) approaches instead of some real MTA with queues and all --> that would be too expensive / much effort. Hence one could simply reject messages after their first delivery attempt --> i.e. signaling a 4xx error -> temporary!

  • a combination of the envelope sender, recipient address, ip-address are then stored in a database together with a timestamp (triplet)
  • if the message is sent again --> matching the entries in the database, we may allow it and add them to a whitelist
    • only after the timer expired ( that was set internally)
  • --> this works because authentic mail servers resend a temporarily failed message delivery after a given time ( ~ 5 - 10 minutes) which the receiving server tracks and then grants to be authentic in the end
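The triplet bookkeeping described above can be sketched in a few lines. This is a toy model only: it assumes an in-memory dict as the database, a 300-second retry delay, and made-up example addresses; a real milter would persist the triplets and expire old ones.

```python
import time

DELAY = 300    # assumed: seconds a sender must wait before a retry is accepted
greylist = {}  # triplet -> timestamp of the first delivery attempt

def check_delivery(sender, recipient, ip, now=None):
    """Return "4xx" (temporary reject) or "250" (accept) for an attempt."""
    now = time.time() if now is None else now
    triplet = (sender, recipient, ip)
    first_seen = greylist.setdefault(triplet, now)  # record first attempt
    if now - first_seen < DELAY:
        return "4xx"  # first attempt, or retried too early: reject temporarily
    return "250"      # a real MTA retried after the delay: accept

print(check_delivery("", "", "", now=0.0))    # -> 4xx
print(check_delivery("", "", "", now=600.0))  # -> 250
```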

Issues with greylisting? #card

  • because we disallow once and allow afterwards, we essentially slow down mail delivery
  • if the sender is aware of this idea they can send twice too - lol.
  • In case an open-relay / mail server or freemail provider are abused / hacked they will work with real MTAs and thus bypass this prevention.


anchored to [[143.00_anchor]] last exercise of this lab focusing on the transport layer ( 4 ), especially features of TCP and UDP


Both protocols reside in the Transport Layer ( 4 of the TCP/IP model ) and are necessary to send application data across networks - encapsulating the payload of all layers above within their own payload. Both work bidirectionally, while TCP is reliable, connection-oriented and comes with features like flow / congestion control, error correction and some additional ones.

On the other hand, UDP is the unreliable counterpart which primarily allows transmission of large - time-sensitive - amounts of packets that can be lost on the way - so no connection-oriented system - without any large setup, making it easy to send data fast at the risk of losing packets and such. A trick to make this reliable again is to incorporate mechanisms that check the data within the UDP packet and establish a routine to resend packets --> done by QUIC for example.

UDP - User Datagram Protocol

Definition to be found at RFC 768

[!Definition] Definition by RFC

This User Datagram Protocol (UDP) is defined to make available a datagram mode of packet-switched computer communication in the environment of an interconnected set of computer networks.

This protocol assumes that the Internet Protocol (IP) 1 is used as the underlying protocol.

This protocol provides a procedure for application programs to send messages to other programs with a minimum of protocol mechanism. The protocol is transaction oriented, and delivery and duplicate protection are not guaranteed. Applications requiring ordered reliable delivery of streams of data should use the Transmission Control Protocol (TCP) 2.

Structure of UDP Packets

Below is the rough overview of the usual UDP-packet. The most important parts that define the whole header are

  • Source-Port ( port used by the sending instance ) ( its optional!)
  • Destination-Port ( port used at receiving end )
  • Length - length of the appended datagram - of this packet
  • Checksum to possibly detect errors in the header ( not used to provide any sort of error correction! ). There is more information to be found in the linked RFC above, but those are the most prominent aspects.

[!Tip] What we can observe here what is making UDP special/different to TCP? #card In its structure UDP is really barebones, with close to no features to enable / allow for error correction or similar. Because it's so small, this packet is great for sending information fast, with the caveat of no reliable transport ( no guarantee of arrival nor of the correct information being sent ) --> Hence it cannot be used to simply send large payloads like files around --> it would be awful due to all the possible transmission errors, missing packets ( invalid fragments ) etc.

However, because it doesn't follow any connection-oriented principle it's fast to send out data without having to establish the means to communicate reliably. --> VoIP / video streams ( or DNS with small queries and time-sensitive operations ) are good examples for the usage of UDP ( or QUIC by Google ).

[!Warning] UDP is not caring about fair transmission Meaning that UDP is not using any form of fairness control to establish a certain fairness between other participants sending data over one connection. --> TCP on the other hand tries to care about this by regulating its transmission rate based on implicit feedback from the network ( latency / congestion and such )

UDP is just blasting without caring much!

```
  0      7 8     15 16    23 24    31
 +--------+--------+--------+--------+
 |     Source      |   Destination   |
 |      Port       |      Port       |
 +--------+--------+--------+--------+
 |                 |                 |
 |     Length      |    Checksum     |
 +--------+--------+--------+--------+
 |
 |          data octets ...
 +---------------- ...

      User Datagram Header Format
```
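The four header fields map directly onto big-endian 16-bit packing with Python's struct module. A sketch of building such a datagram; the checksum is left at 0, which RFC 768 allows to mean "no checksum computed" over IPv4:

```python
import struct

def build_udp_datagram(src_port, dst_port, payload, checksum=0):
    """Pack Source Port, Destination Port, Length and Checksum (RFC 768).
    The Length field covers the 8-byte header plus the payload."""
    length = 8 + len(payload)
    return struct.pack("!4H", src_port, dst_port, length, checksum) + payload

pkt = build_udp_datagram(5353, 53, b"hello")
print(struct.unpack("!4H", pkt[:8]))  # -> (5353, 53, 13, 0)
```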

[!Information] Size of UDP-packets whats the allowed size of a udp packet? what makes up the header-size? #card ( I was a little stupid figuring this out, this message / answer gave some good information too, otherwise just take a look at the corresponding RFC! ) link to stackoverflow

The header is defined as 8 bytes. Within it, 2 bytes are allocated for the total size of the UDP packet ( header plus data ). Hence the maximum packet size is $2^{16} - 1 = 65535$ bytes, leaving at most $65535 - 8 = 65527$ bytes of payload.
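The arithmetic, spelled out (65535 is the largest value of the 16-bit Length field, which counts the header as well):

```python
LENGTH_FIELD_MAX = 2**16 - 1   # 16-bit Length field: at most 65535 bytes total
UDP_HEADER = 8                 # fixed header size in bytes
max_payload = LENGTH_FIELD_MAX - UDP_HEADER
print(max_payload)  # -> 65527 (IP-layer limits reduce this further in practice)
```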

Limits of UDP

We may as well cover certain limitations of UDP packets ( some were mentioned previously already). what are possible limitations of UDP? #card

As mentioned previously, we don't have much of an option to correct errors in transmission. Furthermore we also lack the ability to trace / find out whether a packet was lost or not --> we are not keeping track of segments and whether they arrived! -> Reordering of segments also remains undetected here. There's also the risk of overflowing the receiver by sending them too much information, thus overrunning their buffer.

The total size of the UDP-Segments depends on the Ip-packet that UDP is encapsulated in.

Benefits of UDP

We may as well discuss have a short summary of possible benefits with UDP which can we denote? #card

  • we can detect transmission errors --> the checksum! ( although not being able to resolve them without help from the user application )
  • there's no setup / management of connections, hence we don't have to maintain this state for either sender or receiver
  • the header is small --> less overhead of data, especially if we are sending barely anything anyway ( like with simple DNS requests! )
  • because there's no congestion control we can send much data at once --> throughput is not really limited by anything because we just send data
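The "no setup" property is visible in code: two datagram sockets on loopback can exchange data immediately, with no handshake at all. A minimal sketch using the stdlib socket module:

```python
import socket

# Receiver: bind to an OS-chosen port on loopback.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("", 0))
addr = receiver.getsockname()   # the (ip, port) pair the OS picked

# Sender: no connect / handshake required - just fire the datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)

data, peer = receiver.recvfrom(1024)
print(data)                     # -> b'ping'
sender.close()
receiver.close()
```

Over loopback this practically always arrives; over a real network the same code gives no delivery guarantee, which is exactly the trade-off discussed above.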

TCP - Transmission Control Protocol

TCP is pretty much a counterpart to UDP regarding many features and traits. Definition of TCP can be found in the following RFCs: first definition: RFC 793 updated version: RFC 9293 From the updated version (part 2.2) the key concepts for TCP are:

  • TCP provides a reliable, in-order, byte-stream service to applications.
  • The application byte-stream is conveyed over the network via TCP segments, with each TCP segment sent as an Internet Protocol (IP) datagram.T
  • CP reliability consists of detecting packet losses (via sequence numbers) and errors (via per-segment checksums), as well as correction via retransmission.
  • TCP supports unicast delivery of data.
  • There are anycast applications that can successfully use TCP without modifications, though there is some risk of instability due to changes of lower-layer forwarding behavior 46.
  • TCP is connection oriented, though it does not inherently include a liveness detection capability.
  • Data flow is supported bidirectionally over TCP connections, though applications are free to send data only unidirectionally, if they so choose.
  • TCP uses port numbers to identify application services and to multiplex distinct flows between hosts. A more detailed description of TCP features compared to other transport protocols can be found in Section 3.1 of [52].
  • Further description of the motivations for developing TCP and its role in the Internet protocol stack can be found in Section 2 of [16] and earlier versions of the TCP specification.

Structure of TCP HEADER

As with UDP we have a header that defines important information about the payload conveyed and the packet itself - also giving context to other packets ( e.g. whenever we are fragmenting packets ).

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   |          Source Port          |       Destination Port        |
   |                        Sequence Number                        |
   |                    Acknowledgment Number                      |
   |  Data |       |C|E|U|A|P|R|S|F|                               |
   | Offset| Rsrvd |W|C|R|C|S|S|Y|I|            Window             |
   |       |       |R|E|G|K|H|T|N|N|                               |
   |           Checksum            |         Urgent Pointer        |
   |                           [Options]                           |
   |                                                               :
   :                             Data                              :
   :                                                               |

          Note that one tick mark represents one bit position.

As seen, TCP's header is way larger than the UDP counterpart, primarily because it implements and supports more features which require additional information ( sequence numbers to correctly order segments, for example ). The most important might be the following entries:

  • Source Port / Destination Port - same principle as with UDP!
  • Sequence Number -> denotes the first octet of data for this segment ( so if we send something large we may have to split it, and with this information we can then reassemble the parts in the correct order ) ( also important if packets arrive at different times and thus out of order )
  • Acknowledgement Number -> ( only valid when the ACK control bit is set ) describes the next sequence number the acknowledging side expects to receive --> if we sent some data starting at a sequence number ( + the total amount of data sent there ) we end up at a "new position within the whole data stream" ( because more data has now been received, the sequence number will have to describe the next portion of data we don't know about yet! )
  • Window (size) -> this field is necessary for proper flow control; it denotes the amount of data the receiver is capable / willing to receive at a time ( e.g. denoting whether it can handle a lot of data at once, or is heavily buffering and thus needs a smaller limit )
  • Control Bits / Flags -> Here we have plenty of 1-bit flags that can denote different modes / properties:
    • ACK -> whether the packet is an acknowledgement
    • RST -> can be used to reset the connection ( remember it's a connection-oriented transmission )
    • SYN -> whether the packet is part of the connection setup ( synchronizing the sequence numbers )
    • FIN -> if set it signals that no more data will be sent

Further information, the complete list of flags and more can be found in the corresponding RFC 9293.
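The fixed 20-byte layout shown above maps nicely onto Python's struct module. A small sketch - all field values here are made-up examples:

```python
# Sketch: packing and unpacking the 20-byte fixed part of the TCP header.
import struct

TCP_HDR = struct.Struct("!HHIIHHHH")  # ports, seq, ack, offset/flags, window, checksum, urgent

FLAGS = {"CWR": 0x80, "ECE": 0x40, "URG": 0x20, "ACK": 0x10,
         "PSH": 0x08, "RST": 0x04, "SYN": 0x02, "FIN": 0x01}

def parse_tcp_header(raw: bytes) -> dict:
    src, dst, seq, ack, off_flags, window, checksum, urgent = TCP_HDR.unpack(raw[:20])
    return {
        "src": src, "dst": dst, "seq": seq, "ack": ack,
        "data_offset": off_flags >> 12,   # header length in 32-bit words
        "flags": {name for name, bit in FLAGS.items() if off_flags & bit},
        "window": window, "checksum": checksum, "urgent": urgent,
    }

# a made-up SYN segment: data offset 5 (no options), only the SYN bit set
syn = TCP_HDR.pack(12345, 80, 1000, 0, (5 << 12) | FLAGS["SYN"], 65535, 0, 0)
info = parse_tcp_header(syn)
```

The `!` in the format string gives network byte order; the combined offset/flags field is split with a shift and a mask, mirroring the bit layout in the diagram.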

Connection Set-Up with TCP

As mentioned before TCP is a connection-oriented protocol that establishes and maintains a connection between two hosts to share data.

For that we have to perform a certain exchange to create and signal this connection -> both participants should know whether the connection is actually established or not ( get an ack for their attempt to connect ).

[!Tip] 3 Way Handshake By creating a 3-way handshake / message exchange we can successfully setup a communication between two hosts.


  1. The given client ( the party that wants to establish the connection ) sends a SYN-flagged request containing its initial sequence number.
  2. The receiving end now ought to accept the request and send its own solicitation so it can verify the connection status later. For that it sends a SYN-flagged request that contains its own initial sequence number ( it must be different! ) and also sets the ACK bit + the next expected sequence number from the sending party ( so the client's number incremented by 1 ).
  3. Client 1 is now aware of its successful connection yet ought to signal this to Client 2 too. Hence it sends its current sequence number and an ACK containing the next expected sequence number for Client 2 ( also incremented by 1 ).

After this handshake the TCP connection is alive and data can be exchanged.
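The number bookkeeping of the handshake can be sketched with plain dicts - the messages and field names here are illustration only, not real segments:

```python
# Sketch of the sequence / acknowledgement numbers in the 3-way
# handshake; real stacks randomize the initial sequence numbers (ISNs).
import random

WRAP = 2**32                           # sequence numbers wrap around

client_isn = random.randrange(WRAP)    # randomized, hard to guess
server_isn = random.randrange(WRAP)

# 1) client -> server: SYN carrying the client's ISN
syn = {"flags": {"SYN"}, "seq": client_isn}

# 2) server -> client: SYN+ACK, own ISN, acknowledging client_isn + 1
syn_ack = {"flags": {"SYN", "ACK"},
           "seq": server_isn,
           "ack": (syn["seq"] + 1) % WRAP}

# 3) client -> server: ACK, acknowledging server_isn + 1
ack = {"flags": {"ACK"},
       "seq": syn_ack["ack"],
       "ack": (syn_ack["seq"] + 1) % WRAP}
```

Each side's SYN "consumes" one sequence number, which is why both acknowledgement numbers are the peer's ISN plus one.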

Setting sequence numbers: for security reasons the sequence numbers ( or rather their initial values ) are random and should not be predictable, to prevent spoofing messages by guessing the correct sequence numbers -> link to stackoverflow

Now it's also necessary to close a connection - so that both parties are aware that the transmission has ended.

This is done in the following way:

  1. Client 1 sends a FIN-tagged Request
  2. Client 2 responds with an ACK-tagged Request
  3. Client 2 sends a FIN-tagged Request
  4. Client 1 responds with an ACK-tagged Request

After this procedure both parties know that their connection - and the connection of their peer - was closed, and have signaled this to each other.

TCP Congestion Control

Besides connection-oriented transmission with error correction and some other properties, TCP also deploys the idea of Congestion Control.

What's the idea of congestion control, why can't we just send more at once? #card

Congestion ( in networks ) describes the effect of reduced transmission quality within a network - reduced goodput - due to issues like packet loss or some device not being capable of handling a certain load. Now if the throughput decreases within a network - specifically in a transmission between hosts - we may want to compensate for this loss of performance. One naive idea to compensate would be to send more packets, i.e. increase the retransmissions of lost or corrupted packets. In return this could further congest the whole system and degrade its performance even more - or kill it entirely.

With TCP there's a mechanism that tries to maximize throughput while also trying to reduce the congestion caused by doing so. This is done by using mechanisms / algorithms that correlate the implicit effects of congestion with actions to take. --> We focus on implicit effects because it's unusual that a system would communicate its current congestion explicitly ( after all that status report could be delayed or lost too, rendering it useless -> we can't really react if the status report is missing or delayed ), thus it's better to implicitly listen and find cues to sense the current congestion. TCP can do this by measuring delays in acknowledgements ( answers to a sent packet ) and via the linked retransmission timer --> ( this is primarily for sensing delays! )

And to detect packet loss it uses the sequence numbers --> after all each response denotes the currently expected sequence number. Because segments are ordered, a missing one makes the receiver keep answering with the same acknowledgement number; three such duplicate ACKs are taken as a signal that a packet was lost.

[!Tip] Goal of Congestion Control

With those cues we can somewhat establish a congestion control that will try to maximize throughput while also responding to possible congestion caused by it ( or other connections on the same network ).

Furthermore a certain fairness amongst other connections is deployed - or at least attempted. We don't want to steal all the network capacity for ourselves if other transmissions are happening too.

Establishing a good transmission speed

As mentioned above it's crucial to somehow find the best throughput for our TCP connection, to send data fast while utilizing the whole bandwidth available. We could just send the maximum we suspect, yet this is not predictable at all, hence we ought to deploy the idea of probing.

[!Definition] Probing to find the best window size why probe, how does AIMD work? #card

By gradually probing a window size ( the amount of data to send before waiting for acknowledgements ) we slowly grow our bandwidth usage without directly killing the network with the maximum. Further we can gather responses from the network - and its dynamics - by probing to find the correct speed.

A famous example of this idea is AIMD - additive increase / multiplicative decrease. Here we grow our window size by a given addend and continue doing so until we encounter congestion. If congestion is detected we drop the window size by a given factor ( so divide it by X ) to reduce the congestion again. ( Observing this in a graph shows a sawtooth pattern, which is somewhat cool? )
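A toy sketch of AIMD - the fixed `capacity` threshold here is a made-up stand-in for real congestion signals ( loss / duplicate ACKs ):

```python
# Toy AIMD sketch: additive increase each round trip, multiplicative
# decrease whenever "congestion" is signalled.
def aimd(rounds, capacity=64, addend=1, factor=2):
    window, trace = 1, []
    for _ in range(rounds):
        trace.append(window)
        if window >= capacity:                 # congestion cue (stand-in)
            window = max(1, window // factor)  # multiplicative decrease
        else:
            window += addend                   # additive increase
    return trace

trace = aimd(200)   # plotting this yields the typical sawtooth
```

The window repeatedly climbs to the capacity and halves, which is exactly the sawtooth shape mentioned above.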

There are plenty of algorithms that try to model this behavior while growing faster / reacting better to congestion.

One such example would be TCP CUBIC: what are traits of CUBIC? #card

  • successor of BIC-TCP
  • used for long fat networks
  • default in MacOS / Windows / Linux
  • fair because it grows independently of the RTT
  • window size depends on the previous congestion event only
  • CUBIC spends a lot of time at a plateau between the concave and convex growth region which allows the network to stabilize before CUBIC begins looking for more bandwidth.
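These traits follow from CUBIC's window growth function as given in its specification (RFC 8312): W(t) = C(t-K)³ + W_max, with W_max the window at the last congestion event and K the time needed to grow back to it. A sketch using the RFC's default constants (C = 0.4, β = 0.7):

```python
# Sketch of CUBIC's window growth function, W(t) = C*(t-K)^3 + W_max.
# t is wall-clock time since the last congestion event, not an RTT
# count -- which is where the RTT fairness comes from.
def cubic_window(t, w_max, c=0.4, beta=0.7):
    k = ((w_max * (1 - beta)) / c) ** (1 / 3)   # time to regain w_max
    return c * (t - k) ** 3 + w_max

# right after a congestion event the window drops to beta * w_max,
# then grows concavely towards the plateau around w_max at t = K,
# and only afterwards probes convexly for more bandwidth
```

The flat region around t = K is the plateau mentioned above: near W_max the cubic term is tiny, so the window barely moves while the network stabilizes.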

More information can be found in the paper comparing / analyzing it: link to paper. I also logged it here locally: 211.14_TCP_CUBIC_analysis

Some resources regarding congestion:


Unfairness of TCP in large-RTT contexts

This was a question on stackoverflow asking about the reasons for TCP being unfair with high RTT:

Versions of TCP that use Van Jacobson algorithm are unfair in some contexts, such as satellite communication. I cannot understand why. Is this problem caused by asymmetric links, in which the receiver has more possibility to send acknowledgement packets than the sender?

Some answer provided:

After some research I have found an answer. It is not only the delay-bandwidth product, as suggested in the comment, but several reasons:

  • The throughput of a sender could be written as Throughput=CongestionWindow/RoundTripTime, so if you have a bigger RTT you need bigger CW to reach the same throughput;
  • The capacity of the channel could be written as Capacity=Delay*Bandwidth, so you could retrieve the bandwidth available in this way Bandwidth=Capacity/Delay (and Delay could be the half of the RTT or equal the RTT considering that for each packet the ACK is needed);
  • The CongestionWindow could be written as function of the RTT in that way:
    • CongestionWindow = 2 ^ (t/RTT) in slow start phase, where t is the time;
    • CongestionWindow = ss + (t - tss)/RTT in congestion avoidance phase, where ss is the value of the slow start threshold, t is the time, and tss is the time when the slow start threshold is reached.
      Avoiding making the formulas more complicated because of the possible errors that can occur ( which change the CongestionWindow and the slow start threshold ), it can easily be seen that the CongestionWindow strongly depends on the RTT, and since the RTT appears in the denominator both in the slow start phase and in the congestion avoidance phase, the bigger the RTT is, the more disadvantaged the sender is.
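The argument can be made concrete with a quick computation - the link numbers here are made-up examples, not from the answer:

```python
# Numeric sketch: with Throughput = cwnd / RTT, the congestion window
# needed for a given throughput grows linearly with the RTT, and slow
# start (one doubling per RTT) needs more wall-clock time to build it.
import math

def required_cwnd_bits(throughput_bps, rtt_s):
    return throughput_bps * rtt_s            # bits that must be in flight

lan = required_cwnd_bits(10e6, 0.002)        # 10 Mbit/s at   2 ms RTT
sat = required_cwnd_bits(10e6, 0.600)        # 10 Mbit/s at 600 ms RTT
# the satellite sender needs a 300x larger window for the same rate

# slow-start round trips to reach that window from one 1500-byte segment
rounds_sat = math.ceil(math.log2(sat / 8 / 1500))
time_sat = rounds_sat * 0.600                # seconds of wall-clock time
```

So the high-RTT sender both needs a much larger window and pays more wall-clock time per window-growth step - the double disadvantage described above.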

P2P through NAT

NATs are primarily deployed to compensate for the shortage of public IPv4 addresses - well, and to further delay the rise of and replacement by IPv6?? - and basically deploy a private subnet from the perspective of an ISP towards a given area / region. This region is then only handled by the router spanning and maintaining this network ( well, and some redundancy deployed too ), and these outgoing points have a real public address which all hosts within this net use to communicate with the world.

Traversing from "the internet" to the "internal network" requires modifying packets, because outward clients / systems are not aware of the IP address of the client they are talking to ( the router is translating from the outside address + a specific port to the internal address with a given port ).

This - as one might observe - kills the idea of P2P connections, because it's not possible for the clients to connect seamlessly without some machine in between interrupting ( and translating the packets and such ).

In come some ideas to traverse a NAT and allow P2P connections - with the necessary translations in mind! - like STUN or simple RELAY servers.

STUN - Session Traversal Utilities for NAT

see the original RFC 5389 for more information.

As per definition by the linked RFC:

Session Traversal Utilities for NAT (STUN) is a protocol that serves as a tool for other protocols in dealing with Network Address Translator (NAT) traversal.

It can be used by an endpoint to determine the IP address and port allocated to it by a NAT.

It can also be used to check connectivity between two endpoints, and as a keep-alive protocol to maintain NAT bindings.

STUN works with many existing NATs, and does not require any special behavior from them.

STUN is not a NAT traversal solution by itself. Rather, it is a tool to be used in the context of a NAT traversal solution.

STUN - in its concept - is relatively simple: how is a connection established with STUN? #card

We use the concept that we have two different hosts - different combinations are possible: one behind NAT, the other not; both behind NAT; both behind multiple NATs ... - and further a server that is publicly available ( not within a NAT! ).

Now we know that traversing from within the NAT to the internet will change the source IP of our sending host. With that in mind:

  1. Both hosts connect to the publicly reachable STUN server, which thereby learns both their public address ( in front of the NAT, with its port ) and their private address ( within the NAT ).
  2. We tell the STUN server who we want to talk to.
  3. The STUN server checks the information on the requested host and sends the asking host the information needed to reach the desired host, supplying both the private and the public address ( that were previously communicated / exchanged ). It also sends the desired client a message containing the host information of the requesting client.
  4. Both parties now try to establish a connection over the two given addresses - the NAT'ted one and the public one ( leading to the NAT basically ). One of the connections will come through - which one depends on the underlying structure - and thus enables both clients to communicate with each other directly, with a NAT - or multiple ones - in between translating the IP packets.

Hole Punching ( with STUN)

There's a really good paper explaining all the ideas for achieving good NAT-traversing P2P connections here: website link

I've also logged this website here to preserve - and easily access it locally 143.19_p2p_nat_udp_hole_punching


The following section was mostly taken from the lab - hereby the origin is from the "Lehrstuhl Kommunikationsnetze Universität Tübingen":


Data link layers generally impose an upper bound on the length of a frame, and, thereby, on the length of an IP datagram that can be encapsulated in one frame. If the size of an IP datagram exceeds the maximum length a data link layer can transmit, the datagram has to be fragmented.


For each network interface, the Maximum Transmission Unit (MTU) specifies the maximum length of an IP datagram that can be transmitted over a given data link layer protocol.

For example Ethernet II and IEEE 802.3 networks have an MTU of 1500 bytes and 1492 bytes, respectively.

Some protocols set the MTU to the largest datagram size of 65535 bytes.

However, every host must be able to receive IP datagrams of at least 576 bytes, which is why 576 bytes is commonly treated as the minimum MTU.

If an IP datagram exceeds the MTU size, the IP datagram is fragmented into multiple IP datagrams, or, if the DF flag is set in the IP header, the IP datagram is discarded.

IP Fragmentation Basics

When an IP datagram is fragmented, its payload is split into multiple IP datagrams, each satisfying the limit imposed by the MTU. Each fragment is an independent IP datagram, and is routed in the network independently from the other fragments. Fragmentation can occur at the sending host or at an intermediate router. It is even possible that an IP datagram is fragmented multiple times, e.g., an IP datagram may be transmitted on a network with an MTU of 4000 bytes, then forwarded to a network with an MTU of 2000 bytes, and then to a network with an MTU of 1000 bytes.

Fragments are reassembled only at the destination hosts. If a host receives fragments of a larger IP datagram it holds the fragments until the original IP datagram has been fully restored.

Fragments do not have to be received in the correct order. The destination host can use the fragment offset field to place each fragment in the right position.

IP assumes that a fragment is lost if no new fragments have been received for a timeout period. If such a timeout occurs, all fragments of the original datagram that have been received so far are discarded.

Involved Header Fields

Fragmentation of IP datagrams involves the following fields in the IP header: total length, identification, DF and MF flags, and fragment offset. The fields that are relevant during fragmentation are included in the figure. In the figure an IP datagram with a length of 2400 bytes is transmitted on a network with an MTU of 1000. We assume that the IP header of the datagram has the minimum size of 20 bytes. Since the DF flag is not set in the original IP datagram on the left, the IP datagram is now split into three fragments. All fragments are given the same identification as the original IP datagram. The destination host uses the identification field when reassembling the original IP datagram. The first and second IP datagram have the MF flag set, indicating to the destination host that there are more fragments to come. Without this flag, the receiver of fragments cannot determine if it has received the last fragment.

Fragment Size & Fragment Offset

To determine the size of the fragments we recall that, since there are only 13 bits available for the fragment offset, the offset is given as a multiple of eight bytes.

As a result, the first and second fragment have a size of 996 bytes (and not 1000 bytes).

This number is chosen since 976 is the largest number smaller than 1000-20 = 980 that is divisible by eight. The payload for the first and second fragments is 976 bytes long, with bytes 0 through 975 of the original IP payload in the first fragment, and bytes 976 through 1951 in the second fragment.

The payload of the third fragment has the remaining 428 bytes, from byte 1952 through 2379. With these considerations, we can determine the values of the fragment offset, which are $0$, $976/8 = 122$, and $1952/8 = 244$, respectively, for the first, second and third fragment.
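The arithmetic of this example can be recomputed in a few lines; the function below is just a sketch of the size/offset bookkeeping, not a real IP implementation:

```python
# Recomputing the fragmentation example: a 2400-byte datagram
# (20-byte header, 2380-byte payload) sent over a link with MTU 1000.
# Every fragment except the last must carry a payload that is a
# multiple of 8 bytes, since the offset field counts 8-byte units.
def fragment(total_len, header=20, mtu=1000):
    payload = total_len - header
    chunk = ((mtu - header) // 8) * 8     # largest multiple of 8 that fits
    frags, offset = [], 0
    while payload > 0:
        size = min(chunk, payload)
        frags.append({"payload": size,
                      "offset_field": offset // 8,   # in 8-byte units
                      "mf": payload > size})         # more fragments?
        offset += size
        payload -= size
    return frags

frags = fragment(2400)
# -> payloads 976, 976, 428; offsets 0, 122, 244; MF set on all but the last
```

This reproduces exactly the three fragments derived in the text above.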

Drawbacks of Fragmentation

Even though IP fragmentation provides flexibility that can deal effectively with heterogeneity at the data link layer, and can hide this heterogeneity from the transport layer, it has considerable drawbacks.

For one, fragmentation involves significant processing overhead. Also, if a single fragment of an IP datagram is lost, the entire IP datagram needs to be retransmitted (by a transport protocol). To avoid fragmentation, TCP tries to set the maximum size of TCP segments to conform to the smallest MTU on the path, thereby avoiding fragmentation.

Likewise, applications that send UDP datagrams often avoid fragmentation by limiting the size of UDP datagrams to 512 bytes, thereby ensuring that the resulting IP datagrams are smaller than the minimum MTU of 576 bytes.

Note that in IPv6, routers no longer fragment datagrams in transit; fragmentation may only happen at the source, and even that is generally discouraged.

Further Resources:

Things about NATs:


Count-To-Infinity-Problem when routing

anchored to 143.00_anchor requires knowledge about 143.03_dynamic_routing specifically RIP

There's a good example denoting how Count-To-Infinity may look in a network - link to the RFC:

Unfortunately, the question of how long convergence will take is not amenable to quite so simple an answer. Before going any further, it will be useful to look at an example (taken from [2], "Data Networks"). Note, by the way, that what we are about to show will not happen with a correct implementation of RIP. We are trying to show why certain features are needed. Note that the letters correspond to gateways, and the lines to networks.

A-----B
 \   / \
  \ /  |
   C  /    all networks have cost 1, except
   | /     for the direct link from C to D, which
   |/      has cost 10
   D
   |<=== target network

Each gateway will have a table showing a route to each network.

However, for purposes of this illustration, we show only the routes from each gateway to the network marked at the bottom of the diagram.

D: directly connected, metric 1
B: route via D, metric 2
C: route via B, metric 3
A: route via B, metric 3

Now suppose that the link from B to D fails. The routes should now adjust to use the link from C to D. Unfortunately, it will take a while for this to happen. The routing changes start when B notices that the route to D is no longer usable. For simplicity, the chart below assumes that all gateways send updates at the same time. The chart shows the metric for the target network, as it appears in the routing table at each gateway.

time ------>

D:  dir, 1   dir, 1   dir, 1   dir, 1   ...   dir, 1   dir, 1
B:  unreach  C, 4     C, 5     C, 6     ...   C, 11    C, 12
C:  B, 3     A, 4     A, 5     A, 6     ...   A, 11    D, 11
A:  B, 3     C, 4     C, 5     C, 6     ...   C, 11    C, 12

dir = directly connected unreach = unreachable

Here's the problem: B is able to get rid of its failed route using a timeout mechanism. But vestiges of that route persist in the system for a long time. Initially, A and C still think they can get to D via B. So, they keep sending updates listing metrics of 3. In the next iteration, B will then claim that it can get to D via either A or C. Of course, it can't. The routes being claimed by A and C are now gone, but they have no way of knowing that yet. And even when they discover that their routes via B have gone away, they each think there is a route available via the other. Eventually the system converges, as all the mathematics claims it must. But it can take some time to do so. The worst case is when a network becomes completely inaccessible from some part of the system. In that case, the metrics may increase slowly in a pattern like the one above until they finally reach infinity. For this reason, the problem is called "counting to infinity".
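The counting-up behavior from the chart can be reproduced with a toy synchronous distance-vector simulation - the topology matches the figure, but the synchronous-update model is a simplification:

```python
# Toy distance-vector simulation of the example: gateways A, B, C, D,
# all links cost 1 except C--D with cost 10, target network attached
# to D. The B--D link has just failed; metrics count upwards.
INF = 16   # RIP treats 16 as "infinity" / unreachable

# remaining links after the B--D failure (cost per neighbor)
LINKS = {"A": {"B": 1, "C": 1},
         "B": {"A": 1, "C": 1},
         "C": {"A": 1, "B": 1, "D": 10}}

def step(metric):
    """One round: every gateway recomputes from its neighbors' tables."""
    new = {"D": 1}                        # D stays directly connected
    for g, nbrs in LINKS.items():
        new[g] = min(INF, min(cost + metric[n] for n, cost in nbrs.items()))
    return new

# table state just before the failure (cf. the chart above)
metric = {"A": 3, "B": 2, "C": 3, "D": 1}
trace = [metric]
for _ in range(12):
    metric = step(metric)
    trace.append(metric)
# the metrics climb step by step until C falls back to its direct
# (cost-10) link to D and the system converges
```

Just as in the RFC's chart, the metrics creep upward one unit per round until C's direct cost-10 route finally wins, ending at C: 11 and A, B: 12.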

Routing Information Protocol

anchored to 143.00_anchor

belongs to 143.02_routing_basics

Much of this information was taken from here:

The Routing Information Protocol, or RIP, as it is more commonly called, is one of the most enduring of all routing protocols. RIP is also one of the more easily confused protocols because a variety of RIP-like routing protocols proliferated, some of which even used the same name! RIP and the myriad RIP-like protocols were based on the same set of algorithms that use distance vectors to mathematically compare routes to identify the best path to any given destination address. These algorithms emerged from academic research that dates back to 1957.

Today's open standard version of RIP, sometimes referred to as IP RIP, is formally defined in two documents: Request For Comments (RFC) 1058 and Internet Standard (STD) 56. As IP-based networks became both more numerous and greater in size, it became apparent to the Internet Engineering Task Force (IETF) that RIP needed to be updated. Consequently, the IETF released RFC 1388 in January 1993, which was then superseded in November 1994 by RFC 1723, which describes RIP 2 (the second version of RIP). These RFCs described an extension of RIP's capabilities but did not attempt to obsolete the previous version of RIP. RIP 2 enabled RIP messages to carry more information, which permitted the use of a simple authentication mechanism to secure table updates. More importantly, RIP 2 supported subnet masks, a critical feature that was not available in the original RIP.

Routing Updates

RIP sends routing-update messages at regular intervals and when the network topology changes. When a router receives a routing update that includes changes to an entry, it updates its routing table to reflect the new route. The metric value for the path is increased by 1, and the sender is indicated as the next hop. RIP routers maintain only the best route (the route with the lowest metric value) to a destination. After updating its routing table, the router immediately begins transmitting routing updates to inform other network routers of the change. These updates are sent independently of the regularly scheduled updates that RIP routers send.

RIP Routing Metric

RIP uses a single routing metric (hop count) to measure the distance between the source and a destination network. Each hop in a path from source to destination is assigned a hop count value, which is typically 1. When a router receives a routing update that contains a new or changed destination network entry, the router adds 1 to the metric value indicated in the update and enters the network in the routing table. The IP address of the sender is used as the next hop.

RIP Stability Features

RIP prevents routing loops from continuing indefinitely by implementing a limit on the number of hops allowed in a path from the source to a destination. The maximum number of hops in a path is 15. If a router receives a routing update that contains a new or changed entry, and if increasing the metric value by 1 causes the metric to be infinity (that is, 16), the network destination is considered unreachable. The downside of this stability feature is that it limits the maximum diameter of a RIP network to less than 16 hops.

RIP includes a number of other stability features that are common to many routing protocols. These features are designed to provide stability despite potentially rapid changes in a network's topology. For example, RIP implements the split horizon and holddown mechanisms to prevent incorrect routing information from being propagated.

RIP Timers

RIP uses numerous timers to regulate its performance. These include a routing-update timer, a route-timeout timer, and a route-flush timer. The routing-update timer clocks the interval between periodic routing updates. Generally, it is set to 30 seconds, with a small random amount of time added whenever the timer is reset. This is done to help prevent congestion, which could result from all routers simultaneously attempting to update their neighbors. Each routing table entry has a route-timeout timer associated with it. When the route-timeout timer expires, the route is marked invalid but is retained in the table until the route-flush timer expires.

Loop-prevention : split horizon with poisoned reverse

taken from RFC1058

Split Horizon ( with poisoned reverse ):

Note that some of the problem above is caused by the fact that A and C are engaged in a pattern of mutual deception.
Each claims to be able to get to D via the other. This can be prevented by being a bit more careful about where information is sent.
In particular, it is never useful to claim reachability for a destination network to the neighbor(s) from which the route was learned.

[!Definition] "Split horizon" Split Horizon is a scheme for avoiding problems caused by including routes in updates sent to the gateway from which they were learned. The "simple split horizon" scheme omits routes learned from one neighbor in updates sent to that neighbor.
"Split horizon with poisoned reverse" includes such routes in updates, but sets their metrics to infinity.

If A thinks it can get to D via C, its messages to C should indicate that D is unreachable. If the route through C is real, then C either has a direct connection to D, or a connection through some other gateway. C's route can't possibly go back to A, since that forms a loop. By telling C that D is unreachable, A simply guards against the possibility that C might get confused and believe that there is a route through A. This is obvious for a point-to-point line. But consider the possibility that A and C are connected by a broadcast network such as an Ethernet, and there are other gateways on that network. If A has a route through C, it should indicate that D is unreachable when talking to any other gateway on that network. The other gateways on the network can get to C themselves. They would never need to get to C via A. If A's best route is really through C, no other gateway on that network needs to know that A can reach D. This is fortunate, because it means that the same update message that is used for C can be used for all other gateways on the same network. Thus, update messages can be sent by broadcast.

In general, split horizon with poisoned reverse is safer than simple split horizon.
If two gateways have routes pointing at each other, advertising reverse routes with a metric of 16 will break the loop immediately. If the reverse routes are simply not advertised, the erroneous routes will have to be eliminated by waiting for a timeout. However, poisoned reverse does have a disadvantage: it increases the size of the routing messages.
Consider the case of a campus backbone connecting a number of different buildings. In each building, there is a gateway connecting the backbone to a local network. Consider what routing updates those gateways should broadcast on the backbone network.
All that the rest of the network really needs to know about each gateway is what local networks it is connected to.
Using simple split horizon, only those routes would appear in update messages sent by the gateway to the backbone network. If split horizon with poisoned reverse is used, the gateway must mention all routes that it learns from the backbone, with metrics of 16. If the system is large, this can result in a large update message, almost all of whose entries indicate unreachable networks.

In a static sense, advertising reverse routes with a metric of 16 provides no additional information.
If there are many gateways on one broadcast network, these extra entries can use significant bandwidth. The reason they are there is to improve dynamic behavior. When topology changes, mentioning routes that should not go through the gateway as well as those that should can speed up convergence. However, in some situations, network managers may prefer to accept somewhat slower convergence in order to minimize routing overhead. Thus implementors may at their option implement simple split horizon rather than split horizon with poisoned reverse, or they may provide a configuration option that allows the network manager to choose which behavior to use. It is also permissible to implement hybrid schemes that advertise some reverse routes with a metric of 16 and omit others. An example of such a scheme would be to use a metric of 16 for reverse routes for a certain period of time after routing changes involving them, and thereafter omitting them from updates.
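The difference between the two schemes can be sketched as a tiny function that builds the update a gateway would send to one neighbor - the table layout and network names are made up for illustration:

```python
# Sketch: given a routing table mapping destination -> (next_hop,
# metric), build the update sent to one neighbor under simple split
# horizon vs. split horizon with poisoned reverse.
INF = 16   # "unreachable" in RIP terms

def update_for(table, neighbor, poisoned_reverse=True):
    update = {}
    for dest, (next_hop, metric) in table.items():
        if next_hop == neighbor:
            if poisoned_reverse:
                update[dest] = INF    # advertise, but poison the metric
            # simple split horizon: omit the route entirely
        else:
            update[dest] = metric
    return update

# A reaches netD via C (metric 11) and net1 directly
table_a = {"netD": ("C", 11), "net1": (None, 1)}
```

Poisoned reverse sends the larger update ( every learned route appears, many as metric 16 ), which is exactly the bandwidth trade-off the text discusses.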

BGP - Border Gateway Protocol

anchored to 143.00_anchor source cisco docu


The Border Gateway Protocol (BGP) is an interautonomous system routing protocol. An autonomous system is a network or group of networks under a common administration and with common routing policies. BGP is used to exchange routing information for the Internet and is the protocol used between Internet service providers (ISP). Customer networks, such as universities and corporations, usually employ an Interior Gateway Protocol (IGP) such as RIP or OSPF for the exchange of routing information within their networks. Customers connect to ISPs, and ISPs use BGP to exchange customer and ISP routes. When BGP is used between autonomous systems (AS), the protocol is referred to as External BGP (EBGP). If a service provider is using BGP to exchange routes within an AS, then the protocol is referred to as Interior BGP (IBGP).

[!Tip] Necessity for BGP When structuring or establishing a network - or rather, the internet - we have to connect different Autonomous Systems (AS) and somehow exchange their routing information / routing tables. For that we use BGP to establish and update routing entries so that we can traverse different AS to reach a given subnet, for example.


BGP is a very robust and scalable routing protocol, as evidenced by the fact that BGP is the routing protocol employed on the Internet. At the time of this writing, the Internet BGP routing tables number more than 90,000 routes. To achieve scalability at this level, BGP uses many route parameters, called attributes, to define routing policies and maintain a stable routing environment.

In addition to BGP attributes, classless interdomain routing (CIDR) is used by BGP to reduce the size of the Internet routing tables. For example, assume that an ISP owns the IP address block 195.10.x.x from the traditional Class C address space. This block consists of 256 Class C address blocks, 195.10.0.x through 195.10.255.x. Assume that the ISP assigns a Class C block to each of its customers. Without CIDR, the ISP would advertise 256 Class C address blocks to its BGP peers. With CIDR, BGP can supernet the address space and advertise one block, 195.10.x.x. This block is the same size as a traditional Class B address block. The class distinctions are rendered obsolete by CIDR, allowing a significant reduction in the BGP routing tables.
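The aggregation in the example can be reproduced with Python's standard `ipaddress` module: 256 contiguous /24 blocks (the old Class C size) collapse into a single /16 (the old Class B size):

```python
import ipaddress

# The ISP's 256 former Class C blocks: 195.10.0.0/24 ... 195.10.255.0/24
blocks = [ipaddress.ip_network(f"195.10.{i}.0/24") for i in range(256)]

# CIDR supernetting: contiguous prefixes collapse into one advertisement
aggregate = list(ipaddress.collapse_addresses(blocks))
print(aggregate)  # [IPv4Network('195.10.0.0/16')]
```

Instead of 256 BGP routing-table entries, peers only ever see the one aggregate prefix.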

BGP neighbors exchange full routing information when the TCP connection between neighbors is first established. When changes to the routing table are detected, the BGP routers send to their neighbors only those routes that have changed. BGP routers do not send periodic routing updates, and BGP routing updates advertise only the optimal path to a destination network.

BGP Attributes

Routes learned via BGP have associated properties that are used to determine the best route to a destination when multiple paths exist to a particular destination. These properties are referred to as BGP attributes, and an understanding of how BGP attributes influence route selection is required for the design of robust networks. This section describes the attributes that BGP uses in the route selection process:

  • Weight
  • Local preference
  • Multi-exit discriminator
  • Origin
  • AS_path
  • Next hop
  • Community

Weight Attribute

Weight is a Cisco-defined attribute that is local to a router. The weight attribute is not advertised to neighboring routers. If the router learns about more than one route to the same destination, the route with the highest weight will be preferred. In the figure below, Router A is receiving an advertisement for a network from routers B and C. When Router A receives the advertisement from Router B, the associated weight is set to 50. When Router A receives the advertisement from Router C, the associated weight is set to 100. Both paths for the network will be in the BGP routing table, with their respective weights. The route with the highest weight will be installed in the IP routing table.

Figure: BGP Weight Attribute

Local Preference Attribute

The local preference attribute is used to prefer an exit point from the local autonomous system (AS). Unlike the weight attribute, the local preference attribute is propagated throughout the local AS. If there are multiple exit points from the AS, the local preference attribute is used to select the exit point for a specific route. In the image below, AS 100 is receiving two advertisements for a network from AS 200. When Router A receives the advertisement, the corresponding local preference is set to 50. When Router B receives the advertisement, the corresponding local preference is set to 100. These local preference values will be exchanged between routers A and B. Because Router B has a higher local preference than Router A, Router B will be used as the exit point from AS 100 to reach the network in AS 200.

Figure: BGP Local Preference Attribute

Multi-Exit Discriminator Attribute

The multi-exit discriminator (MED) or metric attribute is used as a suggestion to an external AS regarding the preferred route into the AS that is advertising the metric.

The term suggestion is used because the external AS that is receiving the MEDs may be using other BGP attributes for route selection. We will cover the rules regarding route selection in the next section. In Figure: BGP Multi-Exit Discriminator Attribute, Router C is advertising the route with a metric of 10, while Router D is advertising it with a metric of 5. The lower metric value is preferred, so AS 100 will select the route through Router D for the network in AS 200. MEDs are advertised throughout the local AS.

Origin Attribute

The origin attribute indicates how BGP learned about a particular route. The origin attribute can have one of three possible values:

  • IGP - The route is interior to the originating AS. This value is set when the network router configuration command is used to inject the route into BGP.
  • EGP - The route is learned via the Exterior Border Gateway Protocol (EBGP).
  • Incomplete - The origin of the route is unknown or learned in some other way. An origin of incomplete occurs when a route is redistributed into BGP.

The origin attribute is used for route selection and will be covered in the next section.

Figure: BGP Multi-Exit Discriminator Attribute

AS_path Attribute

When a route advertisement passes through an autonomous system, the AS number is added to an ordered list of AS numbers that the route advertisement has traversed. Figure: BGP AS-path Attribute shows the situation in which a route is passing through three autonomous systems.

AS 1 originates a route and advertises it to AS 2 and AS 3, with the AS_path attribute equal to {1}. AS 3 will advertise the route back to AS 1 with the AS_path attribute {3,1}, and AS 2 will advertise it back to AS 1 with the AS_path attribute {2,1}. AS 1 will reject these routes when its own AS number is detected in the route advertisement. This is the mechanism that BGP uses to detect routing loops. AS 2 and AS 3 propagate the route to each other with their AS numbers added to the AS_path attribute. These routes will not be installed in the IP routing table because AS 2 and AS 3 are learning the route from AS 1 with a shorter AS_path list.

Next-Hop Attribute

The EBGP next-hop attribute is the IP address that is used to reach the advertising router. For EBGP peers, the next-hop address is the IP address of the connection between the peers. For IBGP, the EBGP next-hop address is carried into the local AS, as illustrated below.

Figure: BGP AS-path Attribute

Router C advertises the network with its next-hop address. When Router A propagates this route within its own AS, the EBGP next-hop information is preserved. If Router B does not have routing information regarding the next hop, the route will be discarded. Therefore, it is important to have an IGP running in the AS to propagate next-hop routing information.

Community Attribute

The community attribute provides a way of grouping destinations, called communities, to which routing decisions (such as acceptance, preference, and redistribution) can be applied. Route maps are used to set the community attribute. Predefined community attributes are listed here:

  • no-export - Do not advertise this route to EBGP peers.
  • no-advertise - Do not advertise this route to any peer.
  • internet - Advertise this route to the Internet community; all routers in the network belong to it.

Figure: BGP no-export Community Attribute illustrates the no-export community. AS 1 advertises to AS 2 with the community attribute no-export. AS 2 will propagate the route throughout AS 2 but will not send this route to AS 3 or any other external AS.

Figure: BGP no-export Community Attribute

BGP Path Selection

BGP could possibly receive multiple advertisements for the same route from multiple sources. BGP selects only one path as the best path. When the path is selected, BGP puts the selected path in the IP routing table and propagates the path to its neighbors. BGP uses the following criteria, in the order presented, to select a path for a destination:

  • If the path specifies a next hop that is inaccessible, drop the update.
  • Prefer the path with the largest weight.
  • If the weights are the same, prefer the path with the largest local preference.
  • If the local preferences are the same, prefer the path that was originated by BGP running on this router.
  • If no route was originated, prefer the route that has the shortest AS_path.
  • If all paths have the same AS_path length, prefer the path with the lowest origin type (where IGP is lower than EGP, and EGP is lower than incomplete).
  • If the origin codes are the same, prefer the path with the lowest MED attribute.
  • If the paths have the same MED, prefer the external path over the internal path.
  • If the paths are still the same, prefer the path through the closest IGP neighbor.
  • Prefer the path with the lowest IP address, as specified by the BGP router ID.
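The tie-break list above is essentially a lexicographic comparison. Below is a compact, simplified sketch in Python; the `Path` fields and their names are made up for illustration (real implementations track more state, e.g. route reflection and multipath):

```python
from dataclasses import dataclass

ORIGIN_RANK = {"igp": 0, "egp": 1, "incomplete": 2}  # lower is preferred

@dataclass
class Path:
    weight: int
    local_pref: int
    locally_originated: bool
    as_path: tuple          # sequence of AS numbers
    origin: str             # "igp" | "egp" | "incomplete"
    med: int
    external: bool          # learned via EBGP (preferred over IBGP)
    igp_cost: int           # IGP distance to the next hop
    router_id: str

def best_path(paths):
    """Pick the best of several reachable paths by walking the criteria in order."""
    def rank(p):
        return (-p.weight,                 # largest weight first
                -p.local_pref,             # then largest local preference
                not p.locally_originated,  # then locally originated routes
                len(p.as_path),            # then shortest AS_path
                ORIGIN_RANK[p.origin],     # then lowest origin type
                p.med,                     # then lowest MED
                not p.external,            # then external over internal
                p.igp_cost,                # then closest IGP neighbor
                tuple(map(int, p.router_id.split("."))))  # finally lowest router ID
    return min(paths, key=rank)
```

Because the tuple is compared left to right, a later criterion is only consulted when all earlier ones tie — exactly the "in the order presented" behavior described above.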

OSPF - Open Shortest Path First

anchored to 143.00_anchor | source: cisco docu | further information: RFC 1247

Open Shortest Path First (OSPF) is a routing protocol developed for Internet Protocol (IP) networks by the Interior Gateway Protocol (IGP) working group of the Internet Engineering Task Force (IETF). The working group was formed in 1988 to design an IGP based on the Shortest Path First (SPF) algorithm for use in the Internet. Similar to the Interior Gateway Routing Protocol (IGRP), OSPF was created because in the mid-1980s, the Routing Information Protocol (RIP) was increasingly incapable of serving large, heterogeneous internetworks. This chapter examines the OSPF routing environment, underlying routing algorithm, and general protocol components.


OSPF was derived from several research efforts, including Bolt, Beranek, and Newman's (BBN's) SPF algorithm developed in 1978 for the ARPANET (a landmark packet-switching network developed in the early 1970s by BBN), Dr. Radia Perlman's research on fault-tolerant broadcasting of routing information (1988), BBN's work on area routing (1986), and an early version of OSI's Intermediate System-to-Intermediate System (IS-IS) routing protocol.

OSPF has two primary characteristics. The first is that the protocol is open, which means that its specification is in the public domain. The OSPF specification is published as Request For Comments (RFC) 1247. The second principal characteristic is that OSPF is based on the SPF algorithm, which sometimes is referred to as the Dijkstra algorithm, named for the person credited with its creation.

OSPF is a link-state routing protocol that calls for the sending of link-state advertisements (LSAs) to all other routers within the same hierarchical area. Information on attached interfaces, metrics used, and other variables is included in OSPF LSAs. As OSPF routers accumulate link-state information, they use the SPF algorithm to calculate the shortest path to each node.

As a link-state routing protocol, OSPF contrasts with RIP and IGRP, which are distance-vector routing protocols. Routers running the distance-vector algorithm send all or a portion of their routing tables in routing-update messages to their neighbors.

Routing Hierarchy

Unlike RIP, OSPF can operate within a hierarchy. The largest entity within the hierarchy is the autonomous system (AS), which is a collection of networks under a common administration that share a common routing strategy. OSPF is an intra-AS (interior gateway) routing protocol, although it is capable of receiving routes from and sending routes to other ASs.

An AS can be divided into a number of areas, which are groups of contiguous networks and attached hosts. Routers with multiple interfaces can participate in multiple areas. These routers, which are called Area Border Routers, maintain separate topological databases for each area.

A topological database is essentially an overall picture of networks in relationship to routers. The topological database contains the collection of LSAs received from all routers in the same area. Because routers within the same area share the same information, they have identical topological databases.

The term domain sometimes is used to describe a portion of the network in which all routers have identical topological databases. Domain is frequently used interchangeably with AS.

An area's topology is invisible to entities outside the area. By keeping area topologies separate, OSPF passes less routing traffic than it would if the AS were not partitioned.

Area partitioning creates two different types of OSPF routing, depending on whether the source and the destination are in the same or different areas. Intra-area routing occurs when the source and destination are in the same area; interarea routing occurs when they are in different areas.

An OSPF backbone is responsible for distributing routing information between areas. It consists of all Area Border Routers, networks not wholly contained in any area, and their attached routers.

In the figure, routers 4, 5, 6, 10, 11, and 12 make up the backbone. If Host H1 in Area 3 wants to send a packet to Host H2 in Area 2, the packet is sent to Router 13, which forwards the packet to Router 12, which sends the packet to Router 11. Router 11 then forwards the packet along the backbone to Area Border Router 10, which sends the packet through two intra-area routers (Router 9 and Router 7) to be forwarded to Host H2.

The backbone itself is an OSPF area, so all backbone routers use the same procedures and algorithms to maintain routing information within the backbone that any area router would. The backbone topology is invisible to all intra-area routers, as are individual area topologies to the backbone.

Areas can be defined in such a way that the backbone is not contiguous. In this case, backbone connectivity must be restored through virtual links. Virtual links are configured between any backbone routers that share a link to a nonbackbone area and function as if they were direct links.

AS border routers running OSPF learn about exterior routes through exterior gateway protocols (EGPs), such as the Exterior Gateway Protocol (EGP) or the Border Gateway Protocol (BGP), or through configuration information.

SPF Algorithm

The Shortest Path First (SPF) routing algorithm is the basis for OSPF operations. When an SPF router is powered up, it initializes its routing-protocol data structures and then waits for indications from lower-layer protocols that its interfaces are functional.

After a router is assured that its interfaces are functioning, it uses the OSPF Hello protocol to acquire neighbors, which are routers with interfaces to a common network. The router sends hello packets to its neighbors and receives their hello packets. In addition to helping acquire neighbors, hello packets also act as keepalives to let routers know that other routers are still functional.

On multiaccess networks (networks supporting more than two routers), the Hello protocol elects a designated router and a backup designated router. Among other things, the designated router is responsible for generating LSAs for the entire multiaccess network. Designated routers allow a reduction in network traffic and in the size of the topological database.

When the link-state databases of two neighboring routers are synchronized, the routers are said to be adjacent. On multiaccess networks, the designated router determines which routers should become adjacent. Topological databases are synchronized between pairs of adjacent routers. Adjacencies control the distribution of routing-protocol packets, which are sent and received only on adjacencies.

Each router periodically sends an LSA to provide information on a router's adjacencies or to inform others when a router's state changes. By comparing established adjacencies to link states, failed routers can be detected quickly, and the network's topology can be altered appropriately. From the topological database generated from LSAs, each router calculates a shortest-path tree, with itself as root. The shortest-path tree, in turn, yields a routing table.
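The SPF calculation each router performs over its topological database is Dijkstra's algorithm. A minimal sketch over a cost-weighted adjacency map (the graph layout is made up for illustration):

```python
import heapq

def spf(graph, root):
    """Dijkstra's shortest-path-first over graph: {node: {neighbor: cost}}.
    Returns (dist, parent); parent encodes the shortest-path tree rooted at root."""
    dist, parent = {root: 0}, {}
    queue = [(0, root)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for v, cost in graph.get(u, {}).items():
            if d + cost < dist.get(v, float("inf")):
                dist[v], parent[v] = d + cost, u
                heapq.heappush(queue, (d + cost, v))
    return dist, parent

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}
print(spf(graph, "A"))  # ({'A': 0, 'B': 1, 'C': 2}, {'B': 'A', 'C': 'B'})
```

The `parent` map is the shortest-path tree "with itself as root", and reading off each destination's first hop from it yields the routing table.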

Additional OSPF Features

Additional OSPF features include equal-cost multipath routing and routing based on upper-layer type-of-service (TOS) requests. TOS-based routing supports those upper-layer protocols that can specify particular types of service. An application, for example, might specify that certain data is urgent. If OSPF has high-priority links at its disposal, these can be used to transport the urgent datagram.

OSPF supports one or more metrics. If only one metric is used, it is considered to be arbitrary, and TOS is not supported. If more than one metric is used, TOS is optionally supported through the use of a separate metric (and, therefore, a separate routing table) for each of the eight combinations created by the three IP TOS bits (the delay, throughput, and reliability bits). For example, if the IP TOS bits specify low delay, low throughput, and high reliability, OSPF calculates routes to all destinations based on this TOS designation.

IP subnet masks are included with each advertised destination, enabling variable-length subnet masks. With variable-length subnet masks, an IP network can be broken into many subnets of various sizes. This provides network administrators with extra network-configuration flexibility.
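Because each advertised destination carries its own mask, an address block can be carved into subnets of different sizes. A quick sketch with the standard `ipaddress` module (the prefixes are made up for illustration):

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")

# Variable-length subnetting: one /25 for a large segment,
# and the remaining space split into two /26s for smaller ones.
large = ipaddress.ip_network("192.168.0.0/25")
rest, = net.address_exclude(large)          # 192.168.0.128/25 remains
small = list(rest.subnets(new_prefix=26))   # two /26 subnets
print(large, small)
```

A classful protocol like RIPv1 could not advertise these mixed-length prefixes; OSPF can, because the mask travels with every destination.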

Dynamic Host Configuration Protocol (DHCP)

anchored to 143.00_anchor

This information was taken from the "internetpraktikum" course at uni tuebingen. I copy and modify / adapt some of their slides / notes to further my understanding.


Manually configuring the network parameters of many computers may be tedious or even impossible. If not all computers are connected to the network permanently, not all IP addresses need to be assigned simultaneously. A network may then need fewer addresses than there are hosts connecting to it.


  • Simplified installation and maintenance of connected computers
  • Most widely automatic integration of computers into the intranet and internet respectively
  • Automatic assignment of the following among other items:
    • IP address & subnetmask
    • (default) gateway
    • DNS server

Design | Structure of DHCP

The design of DHCP supports many of the current network technologies. It is possible to expand its range of functions with additional configuration parameters for future purposes. The first RFC to describe DHCP was RFC 1531; a more up-to-date version is described in RFC 2131.

The protocol is based on the client/server model, its flow principally works like this:

  1. If DHCP is enabled on a client, it will broadcast a request to query a DHCP server for e.g. an IP address. In special circumstances this might happen via a relay server as shown in figure 9.2.1.
  2. The server will then respond with the requested configuration.

Figure 9.2.1 DHCP relay

Static allocation

A certain fixed IP address is given to the client. The client's MAC address serves as an identifier. This assignment lasts for an indefinite period of time. This allocation method has the disadvantage that statically assigned IPs cannot be assigned to other clients.

Automatic (static) allocation

When allocating IPs automatically, the server is provided with an IP range. This means it holds a pool of IPs that may be assigned to computers. Again, MAC addresses are added to identify the clients that are allowed to receive an IP address from the pool. The period of time is indefinite as well, and this method has the same disadvantages as described in the last paragraph.

Dynamic allocation

Dynamic allocation works like automatic allocation, but instead of listing MAC addresses, a lease time is defined. This time usually ranges from a few minutes up to weeks. The clients are allowed to keep their IP addresses for the specified time. If a client does not signal the server that it still needs the address, the address may be reassigned to other clients after the lease time is up. Normally a server keeps the IP address assigned to the client even after the lease is up, until all addresses in the pool are used up; only then are IP addresses with expired leases reallocated. Typically a client therefore receives the same address again.

Mixed mode

In this context, mixed means e.g. that servers receive their IPs via static allocation while clients are given IPs dynamically.


The IP assignment process is divided into four steps that form the so-called 'DORA' process. The name of each step corresponds to the type of packet sent by the client or server:

  1. DHCPDISCOVER - is broadcasted by clients to find available servers on their subnet.
  2. DHCPOFFER - is the server's response with an offer of configuration parameters; this is called a lease offer.
  3. DHCPREQUEST - is a client message to the server that can have different functions:
    1. request for the offered parameters (Lease-Request)
    2. request to check if the IP address in use is valid
    3. request to extend the lease time for current address
  4. DHCPACK - this server message contains the IP address and the other configuration parameters
  • DHCPNAK - is sent from server to client, it informs the client that it uses a wrong IP address (e. g. client changed the subnet) or the lease time is up
  • DHCPDECLINE - the client informs the server that the offered IP address is already in use by another client
  • DHCPRELEASE - the client informs the server that it releases the IP address and that the lease can be freed
  • DHCPINFORM - the client requests the local configuration parameters from the server (this is not the request for an IP address)
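The DORA exchange can be sketched as a toy server state machine. This only models the lease bookkeeping, not real packets, timers, or subnets; the class and method names are made up for illustration:

```python
class ToyDhcpServer:
    """Minimal lease bookkeeping for the DORA exchange (no networking)."""

    def __init__(self, pool):
        self.free = list(pool)   # unassigned addresses
        self.leases = {}         # client MAC -> leased IP

    def on_discover(self, mac):
        """DHCPDISCOVER -> DHCPOFFER: propose a candidate address."""
        return self.leases.get(mac) or self.free[0]

    def on_request(self, mac, ip):
        """DHCPREQUEST -> DHCPACK if the lease can be granted, else DHCPNAK."""
        if self.leases.get(mac) == ip:
            return "DHCPACK"          # lease renewal / extension
        if ip in self.free:
            self.free.remove(ip)
            self.leases[mac] = ip
            return "DHCPACK"
        return "DHCPNAK"              # address taken or otherwise invalid

# Client side: broadcast DISCOVER, pick an OFFER, send REQUEST, await ACK.
server = ToyDhcpServer(["10.0.0.10", "10.0.0.11"])
offered = server.on_discover("aa:bb:cc:dd:ee:ff")
print(offered, server.on_request("aa:bb:cc:dd:ee:ff", offered))
```

A second client requesting the same address would receive DHCPNAK, mirroring the conflict cases handled by DHCPNAK/DHCPDECLINE above.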

SLAAC | v6 Alternative to DHCP

As the name suggests, SLAAC is a mechanism to configure IPv6 addresses on hosts automatically without the need to keep state at a server. For this purpose, SLAAC uses a three-step process.

  1. When a network interface of a host becomes enabled, the host needs to assign itself a link-local IPv6 address. It does so by combining the link-local prefix FE80::0 with a unique identifier corresponding to the network interface. Before the host assigns this address to itself, it needs to check that it is unique. Once the link-local address is assigned to the interface, the host is able to communicate with its direct neighbours.

  2. After the link-local address is set up, the host initiates IPv6 Neighbour Discovery and sends a Router Solicitation message to a special multicast group that contains all available routers. The routers will then answer with Router Advertisement packets that contain information about the local prefixes.

  3. The host then chooses a unicast address from the local prefix. The address is usually either chosen based on the MAC address of the network interface, or it is generated randomly using the IPv6 privacy extension. Before it configures the interface with this address, it needs to perform Duplicate Address Detection (DAD) to ensure that the address is not in use by another host already.
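The MAC-based derivation used in steps 1 and 3 is modified EUI-64: flip the universal/local bit of the first MAC byte and insert FF:FE between the two halves. A minimal sketch (no zero-compression subtleties beyond the fe80:: prefix):

```python
def eui64_link_local(mac: str) -> str:
    """Derive the link-local IPv6 address from a MAC using modified EUI-64."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    # flip the universal/local bit, insert FF:FE between OUI and NIC halves
    iid = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]
    groups = [f"{(iid[i] << 8) | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(eui64_link_local("00:11:22:33:44:55"))  # fe80::211:22ff:fe33:4455
```

Because this embeds the MAC (and thus tracks the host across networks), the randomly generated alternative from the IPv6 privacy extension is often preferred in practice.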


In some cases, configuring IPv6 addresses statelessly is not desirable. One example is when hosts should always be assigned the same IPv6 addresses. In this case, an adapted version of DHCP, DHCPv6, can be used. In contrast to IPv4's DHCP, DHCPv6 is not run automatically as soon as a host is connected to a network, but only after a link-local IPv6 address has been assigned and a Router Advertisement message has been received that indicates that DHCPv6 is used in this network.

Music links

anchored to [[353.00_anchor]]

Thoughts about music

[!Tip] Cute comment I read about listening to music with people:

taken from -> On my own Birthday partys i play my full music collection in shuffle mode, its around 200 gb from many different genres. The only choice you have, if you dont like a song is the skip button, nothing else is allowed. Sometimes there are some funny moments or jump scares when a brutal technical death metal song just comes after some lofi hiphop and then followed by techno or classical music... Very funny and always some surprises i did not even know myself :D

[!Quote] Message by @exyl_sounds regarding song linked above

found at

Fun lil backstory Before I even learned how to produce, I would always hang out in music producers' servers and in the help channels, people always used this website called "Clyp" to share what they're working on. Paper skies has been in the game grindin' for HELLA long because back in the day, people were really looking up to him and he was trending on clyp. His progress is honestly awesome to witness. Even through his ups and downs, bro is STILL grinding despite events as insanely unbased and not goated as Clyp shutting down and promoters gatekeeping him from playing shows. Thus, years later, as a fellow Canadian homie, when he asked me to be his collab partner for a 24 hour contest, you already know it was gonna be legendary. We made some absolutely INSANE progress on the song during the contest which you can actually hear in the mix I did a while ago. We really decided that this was a special song and we wanted to take our time with it, iron out all the mixing quirks, prepare visuals, etc. It is the exact type of nostalgic complextro goodness that takes me back to the times when I was still hearing EDM for the first time, and is a true 50/50 collab where both of us did our best to integrate our styles and preferences. Lastly, as a tribute to this mythical collision of fates, Paper Skies suggested we give out a free sample pack featuring our favorite sounds from the song =) Use them as you wish to flip, remix, or produce to your heart's desires, and feel free to tag us with your work too!

Collection of Artists/Music/Discoveries

  • Fleeting Words - NieR Replicant : youtube
  • vojum cooking on piano : vojum twitter
  • The Sleepwalk - warp wet woods ( android52 edit)
  • Quixsmell Gameboy youtube
  • Quixsmell Make me feel remix youtube
  • Quixsmell Ibanna Furrhy youtube
  • Quixsmell I got you remix youtube
  • Who Came After - the kid that could fly but did not know how youtube
  • Resom - Boiler room set berlin : youtube
  • Baribal - Czuje soundcloud


  • NTS -> Nuts to Soup Radio ||

  • refuge worldwide --> radio from berlin ||

  • TGV Synthwave mix: youtube

  • i dont remember that one youtube

cool artists I've found

cool songs found on soundcloud:

  • Virtual Riot - Continue (Remix) : soundcloud
  • RL Grime & JUelz - Formula ( IMANU REMIX) : soundcloud
  • Mameyudoufu - Vintage Computers : soundcloud
    • very chiptuny / energydrink-ish song, its amazing!
  • Sleepnet - First Light (album) : bandcamp
  • Buunshin & IMANU - No tomorrow rework : soundcloud
  • Strangerbloom ( Stranger Things Tribute) : soundcloud
  • Mashaqill - Move on : soundcloud
  • DireDireDocks Eliminate mix : soundcloud
  • Xilart - Wab soundcloud
  • Sotareko - Nightscape : soundcloud
  • Asanity - Hello World : soundcloud
  • Grey,Virtual Riot - Raven : soundcloud
    • its amazing, so well engineered -- usual virtual Riot moment x)


  • Song of the Ancients / Devola : nier cover - song of ancients
  • Ashes of Dreams : ashes of dreams cover

Why I made this website


As described in the index, I always wanted a webpage that was easy to modify and compile, and involved close to no "scripting" in HTML and CSS - I'm really bad at it and don't like it either.

... #TODO