In general, we can't know the values of the states $\mathbf{q}$ of a state-space system; we only have access to the input $x$ and the output $y$. Observers are used to estimate the states $\mathbf{q}$ from the input and output.
A realistic state-space model of a system includes some extra terms: a plant disturbance $\mathbf{w}[n]$ and output measurement noise $\zeta[n]$:
$$ \mathbf{q}[n+1] = \mathbf{A}\mathbf{q}[n] + \mathbf{b}x[n] + \mathbf{w}[n] $$ $$ y[n] = \mathbf{c}^T\mathbf{q}[n] + dx[n] + \zeta[n] $$
We can set up a real-time model that is a replica of the real system:
$$ \hat{\mathbf{q}}[n+1] = \mathbf{A}\hat{\mathbf{q}}[n] + \mathbf{b}x[n] $$ $$ \hat{\mathbf{y}}[n] = \mathbf{c}^T\hat{\mathbf{q}}[n] + dx[n] $$
Notice the differences between the model and the system: the model omits the disturbance $\mathbf{w}[n]$ and the measurement noise $\zeta[n]$, and it starts from an estimated (generally incorrect) initial state.
The error $\tilde{\mathbf{q}}$ is the difference between the actual and estimated states.
$$ \tilde{\mathbf{q}} = \mathbf{q} - \hat{\mathbf{q}} $$
The error evolves according to the following equation:
$$ \tilde{\mathbf{q}}[n+1] = \mathbf{A}\tilde{\mathbf{q}}[n] + \mathbf{w}[n] $$
with initial condition:
$$ \tilde{\mathbf{q}}[0] = \mathbf{q}[0] - \hat{\mathbf{q}}[0] $$
Whether the error decays to zero (i.e. the model converges to the physical system) depends on the eigenvalues of $\mathbf{A}$: in this DT setting, the error decays only if every eigenvalue of $\mathbf{A}$ has magnitude less than one.
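As a quick numerical check, here is a minimal sketch (assuming a hypothetical two-state system with illustrative values) showing that the open-loop error norm decays when all eigenvalues of $\mathbf{A}$ lie inside the unit circle:

```python
import numpy as np

# Hypothetical 2-state system matrix (illustrative values);
# eigenvalues are 0.5 and 0.8, both inside the unit circle.
A = np.array([[0.5, 0.2],
              [0.0, 0.8]])

q_err = np.array([1.0, -1.0])   # initial error q[0] - q_hat[0]
norms = []
for n in range(40):
    norms.append(np.linalg.norm(q_err))
    q_err = A @ q_err           # open-loop error update (disturbance w omitted)

print(norms[0], norms[-1])      # the error shrinks because |eigenvalues| < 1
```

With an eigenvalue of $\mathbf{A}$ outside the unit circle, the same loop would show the error norm growing instead.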
We can manipulate the system matrix of the error system by adding feedback. Because we can't access the states of the physical plant, we will use the output of the plant.
$$ \hat{\mathbf{q}}[n+1] = \mathbf{A}\hat{\mathbf{q}}[n] + \mathbf{b}x[n] - \mathbf{l}(y[n] - \hat{y}[n]) $$
where
$$ \mathbf{l} = \begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_L \end{bmatrix} $$
$$ y - \hat{y} = \mathbf{c}^T \tilde{\mathbf{q}} + \zeta $$
Substituting in the output error expression, we get:
$$ \tilde{\mathbf{q}}[n+1] = \mathbf{A} \tilde{\mathbf{q}}[n] + \mathbf{w}[n] + \mathbf{l}\mathbf{c}^T\tilde{\mathbf{q}}[n] + \mathbf{l}\zeta[n] $$ $$ = (\mathbf{A} + \mathbf{l} \mathbf{c}^T) \tilde{\mathbf{q}}[n] + \mathbf{w}[n] + \mathbf{l}\zeta[n] $$
The closed-loop error evolution equation has the system matrix $(\mathbf{A} + \mathbf{l}\mathbf{c}^T)$. We can choose $\mathbf{l}$ to make this error system stable: i.e. place the eigenvalues such that they have negative real parts (CT case) or magnitudes less than one (DT case). The tradeoff is that any $\mathbf{l} \neq \mathbf{0}$ introduces the $\mathbf{l}\zeta[n]$ term, so the output measurement noise now drives the error.
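A small simulation sketch of this tradeoff, using a hypothetical unstable plant and an illustrative gain $\mathbf{l}$ (all values assumed for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant (illustrative values): eigenvalues 1.05 and 0.9,
# so the open-loop error system is unstable.
A = np.array([[1.05, 0.1],
              [0.0,  0.9]])
c = np.array([1.0, 0.0])
l = np.array([-1.15, -2.4])   # moves eigenvalues of A + l c^T to 0.5 and 0.3

# The closed-loop error system is stable even though the plant is not.
print(np.abs(np.linalg.eigvals(A + np.outer(l, c))))

# Simulate the error recursion with output measurement noise zeta.
q_err = np.array([1.0, 1.0])
for n in range(200):
    zeta = 0.01 * rng.standard_normal()
    q_err = (A + np.outer(l, c)) @ q_err + l * zeta

# The initial error decays, but the l*zeta term keeps it from reaching zero.
print(np.linalg.norm(q_err))
```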
As a reminder, an unobservable mode cannot be observed in the output. This means that unobservable modes of the plant remain modes of the error system, no matter what $\mathbf{l}$ we choose.
For an unobservable mode associated with eigenvalue $\lambda_k$:
$$ \mathbf{A}\mathbf{v}_k = \lambda_k \mathbf{v}_k, \underbrace{\mathbf{c}^T\mathbf{v}_k}_{\xi_k} = 0 $$
$$ (\mathbf{A} + \mathbf{l}\mathbf{c}^T)\mathbf{v}_k = \mathbf{A}\mathbf{v}_k = \lambda_k\mathbf{v}_k $$
The observable modes of the plant can be moved to arbitrary self-conjugate locations by choice of $\mathbf{l}$. This can be done by choosing $\mathbf{l}$ such that:
$$ \mathrm{det}(\lambda \mathbf{I} - \mathbf{A} - \mathbf{l}\mathbf{c}^T) = (\lambda - \epsilon_1)(\lambda - \epsilon_2) \dots (\lambda - \epsilon_L) $$
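Because the update $\mathbf{l}\mathbf{c}^T$ has rank one, the coefficients of $\det(\lambda\mathbf{I} - \mathbf{A} - \mathbf{l}\mathbf{c}^T)$ are affine in $\mathbf{l}$, so the gain can be found with a single linear solve. A sketch of this placement, assuming a hypothetical observable plant with illustrative values:

```python
import numpy as np

# Hypothetical observable plant (illustrative values).
A = np.array([[1.05, 0.1],
              [0.0,  0.9]])
c = np.array([1.0, 0.0])
targets = [0.5, 0.3]    # desired observer eigenvalues eps_1, eps_2

# The non-leading coefficients of det(lambda*I - A - l*c^T) are
# p0 + M @ l for some matrix M; build M column by column.
p0 = np.poly(A)[1:]
M = np.column_stack([np.poly(A + np.outer(e, c))[1:] - p0
                     for e in np.eye(2)])
desired = np.poly(targets)[1:]
l = np.linalg.solve(M, desired - p0)

# Verify: eigenvalues of A + l c^T land at the targets.
print(np.sort(np.real(np.linalg.eigvals(A + np.outer(l, c)))))
```

For an unobservable plant the matrix `M` would be singular, reflecting the fact that unobservable modes cannot be moved.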
The reason we want state values is to implement state feedback.
Consider a system with the following state evolution equation:
$$ \mathbf{q}[n+1] = \mathbf{A} \mathbf{q}[n] + \mathbf{b} x[n] $$
If we know the values of the state variables $\mathbf{q}$, then we can use those in the input $x$.
$$ x[n] = \mathbf{g}^T \mathbf{q}[n] + p[n] $$
Then, the closed-loop equation becomes:
$$ \mathbf{q}[n+1] = (\mathbf{A} + \mathbf{b} \mathbf{g}^T) \mathbf{q}[n] + \mathbf{b} p[n] $$
And we have a new system matrix $(\mathbf{A} + \mathbf{b} \mathbf{g}^T)$ with new eigenvalues.
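As a minimal numerical check (with a hypothetical reachable plant and an illustrative gain $\mathbf{g}$, all values assumed), state feedback moves the eigenvalues of the system matrix:

```python
import numpy as np

# Hypothetical reachable plant (illustrative values); the open-loop
# eigenvalue 1.05 lies outside the unit circle, so the plant is unstable.
A = np.array([[1.05, 0.1],
              [0.0,  0.9]])
b = np.array([0.0, 1.0])
print(np.abs(np.linalg.eigvals(A)))

g = np.array([-3.575, -1.05])   # chosen to place eigenvalues at 0.5 and 0.4
A_cl = A + np.outer(b, g)
print(np.sort(np.real(np.linalg.eigvals(A_cl))))
```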
Since we cannot directly know the states $\mathbf{q}$ of the system, we can use the estimated states from the observer to approximate state feedback.
$$ x = \mathbf{g}^T \hat{\mathbf{q}} + p $$
where $\hat{\mathbf{q}}$ is the estimated state vector.
The state vector of the combined plant-plus-observer system has twice as many elements: the original states and the estimated states from the observer.
One choice of the new state vector is:
$$ \begin{bmatrix} \mathbf{q} \\ \hat{\mathbf{q}} \end{bmatrix} $$
Another choice is:
$$ \begin{bmatrix} \mathbf{q} \\ \tilde{\mathbf{q}} \end{bmatrix} $$
where $\tilde{\mathbf{q}} = \mathbf{q} - \hat{\mathbf{q}}$ is the error between the true state vector and the estimated state vector.
Now we can rewrite the input $x$ as:
$$ x = \mathbf{g}^T (\mathbf{q} - \tilde{\mathbf{q}}) + p $$
Using this new state vector, the right side of our new state evolution equation becomes:
$$ \begin{bmatrix} \mathbf{A} + \mathbf{b}\mathbf{g}^T & -\mathbf{b}\mathbf{g}^T \\ \mathbf{0} & \mathbf{A} + \mathbf{l}\mathbf{c}^T \end{bmatrix} \begin{bmatrix} \mathbf{q} \\ \tilde{\mathbf{q}} \end{bmatrix} + \begin{bmatrix} \mathbf{b} \\ \mathbf{0} \end{bmatrix} p$$
The eigenvalues of a block triangular matrix are the union of the eigenvalues of its diagonal blocks.
$$ \lambda \left(\begin{bmatrix} \mathbf{M} & \mathbf{P} \\ \mathbf{0} & \mathbf{N} \end{bmatrix} \right) = \lambda(\mathbf{M}) \cup \lambda(\mathbf{N}) $$
Therefore, the eigenvalues of the whole observer-based state feedback system are the eigenvalues of $\mathbf{A} + \mathbf{b}\mathbf{g}^T$ and $\mathbf{A} + \mathbf{l}\mathbf{c}^T$.
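This separation can be verified numerically. A sketch using a hypothetical plant and illustrative gains $\mathbf{g}$ and $\mathbf{l}$ (all values assumed):

```python
import numpy as np

# Hypothetical plant and gains (illustrative values).
A = np.array([[1.05, 0.1],
              [0.0,  0.9]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])
g = np.array([-3.575, -1.05])    # state-feedback gain
l = np.array([-1.15, -2.4])      # observer gain

# Block triangular matrix of the combined [q; q_err] system.
top = np.hstack([A + np.outer(b, g), -np.outer(b, g)])
bottom = np.hstack([np.zeros((2, 2)), A + np.outer(l, c)])
big = np.vstack([top, bottom])

# Its eigenvalues are the union of those of A + b g^T and A + l c^T.
union = np.concatenate([np.linalg.eigvals(A + np.outer(b, g)),
                        np.linalg.eigvals(A + np.outer(l, c))])
print(np.sort(np.real(np.linalg.eigvals(big))))
print(np.sort(np.real(union)))
```

The two printed lists agree, so the feedback eigenvalues and the observer eigenvalues can be designed independently.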
State feedback does not affect reachability, because it does not change the input's ability to excite the modes. It can, however, induce unobservability.