Parameterized Complexity and Exact Algorithms

We study NP-hard problems such as Steiner Tree and Graph Colouring.

There is typically a parameter that describes the input size (e.g. n in the problems above), and we would like to know how efficiently we can solve the problem in terms of n. We often view 'polynomial in n' as efficient, but for the problems above no such algorithms are known (and none exist unless P=NP).

So what can we do if we still want to always obtain the correct answer? One answer is exact exponential-time algorithms: we accept a running time that is exponential in the input size n, but try to make that exponential dependence as small as possible, for instance by beating exhaustive search.
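As a baseline illustration, here is a minimal sketch of the naive exact algorithm for Graph Colouring: it tries all k^n colour assignments, so it is always correct but exponentially slow. The Python adjacency-dict representation is just an assumption of the example.

```python
from itertools import product

def is_k_colourable(adj, k):
    """Exact brute-force test: does the graph admit a proper k-colouring?

    adj: dict mapping each vertex to an iterable of its neighbours.
    Tries all k^n colour assignments, so the running time is roughly k^n * m:
    exponential in n, but the answer is always correct.
    """
    vertices = list(adj)
    for colouring in product(range(k), repeat=len(vertices)):
        colour = dict(zip(vertices, colouring))
        if all(colour[u] != colour[v] for u in adj for v in adj[u]):
            return True
    return False

# A triangle needs 3 colours: not 2-colourable, but 3-colourable.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
assert not is_k_colourable(triangle, 2)
assert is_k_colourable(triangle, 3)
```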

In parameterized complexity, this is taken one step further: we study the running time not just in terms of the input size, but also in terms of another parameter. This parameter can be, for example, the number of terminals t in Steiner Tree, the number of timeslots/colours in Graph Colouring, or a parameter of the input capturing 'how much the network looks like a tree' (treewidth). Broadly, there are three possibilities for a problem with parameter k:

- the problem is fixed-parameter tractable (FPT), i.e. solvable in time f(k)·n^{O(1)} for some function f depending only on k;
- the problem is solvable in polynomial time for every fixed value of k (time n^{f(k)}), but is probably not FPT (for instance, it is W[1]-hard);
- the problem remains NP-hard even when k is a fixed constant, so even an n^{f(k)}-time algorithm is unlikely.

The idea is that the parameter k is much smaller than the input size n, but we still want to find the optimal dependence of the running time on both k and n.
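To make the notion of an f(k)·n^{O(1)}-time algorithm concrete, below is a sketch of the classic 2^k branching algorithm for Vertex Cover parameterized by the solution size k; the edge-list representation is an assumption made for the example.

```python
def vertex_cover_at_most_k(edges, k):
    """Classic FPT branching for Vertex Cover parameterized by solution size k.

    edges: list of (u, v) pairs. Returns True iff some set of at most k
    vertices touches every edge. Pick any uncovered edge; one of its two
    endpoints must be in the cover, so branch on both choices. The search
    tree has at most 2^k leaves, giving a running time of about 2^k * (n + m).
    """
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but no budget left
    u, v = edges[0]          # an arbitrary uncovered edge
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return (vertex_cover_at_most_k(without_u, k - 1)
            or vertex_cover_at_most_k(without_v, k - 1))

# A path on 4 vertices has a vertex cover of size 2, but not of size 1.
path = [(0, 1), (1, 2), (2, 3)]
assert vertex_cover_at_most_k(path, 2)
assert not vertex_cover_at_most_k(path, 1)
```

Here the 2^k factor depends only on the parameter, while the dependence on the input size stays polynomial, which is exactly the FPT behaviour described above.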

We can give upper bounds on the running time by providing an algorithm. Unfortunately, we have no idea how to prove unconditional lower bounds, and instead we prove claims of the form 'if problem X can be solved in polynomial time, then so can problem Y'. This allows one to define complexity classes of problems that are all 'equally hard' in some sense. Moreover, we can also obtain concrete lower bounds on the running time by assuming such bounds for one problem. For example, the Exponential Time Hypothesis (ETH) typically allows us to exclude subexponential running times (e.g. 'Problem X cannot be solved in time 2^{o(n)}' or 'cannot be solved in time 2^{o(k)} n^{O(1)}'), whereas the Strong Exponential Time Hypothesis (SETH) may even allow us to pinpoint the base of the exponent ('Problem X cannot be solved in time 1.9999^n').
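For reference, the standard formulations of these two hypotheses, stated for CNF-SAT with n variables, are roughly the following:

```latex
% Exponential Time Hypothesis (ETH): there is a fixed delta > 0 below which
% 3-SAT cannot be solved, i.e. 3-SAT has no 2^{o(n)}-time algorithm.
\text{ETH:}\quad \exists\, \delta > 0 \ \text{such that 3-SAT cannot be solved in time } O\!\bigl(2^{\delta n}\bigr).

% Strong Exponential Time Hypothesis (SETH): brute-force 2^n is essentially
% optimal for k-SAT once the clause width k is large enough.
\text{SETH:}\quad \forall\, \varepsilon > 0 \ \exists\, k \ \text{such that } k\text{-SAT cannot be solved in time } O\!\bigl((2-\varepsilon)^{n}\bigr).
```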