$h$ in \ref{subsubsec:algo_1_extensions}.

In our SVT algorithm, we explicitly build a unitary encoding
$(\widetilde{R}, R, U)$ of $\mathcal{P}$, inspired by the construction proposed in
\textit{Corollary 34} of
\cite{gilyen_su_low_wiebe_2019}, which stems from Szegedy's construction of
Quantum Walk Operators (see \cite{szegedy}). Indeed, the unitary
$U = U_L^\dagger U_R$ is built from the two state-preparation
unitaries $U_L$ and $U_R$:

\begin{align}
    U_R : \ket{0} \ket{x} &\rightarrow \sum_{y \in V} \sqrt{P_{xy}} \ket{x} \ket{y} \\
    U_L : \ket{0} \ket{y} &\rightarrow \sum_{x \in V} \sqrt{P^*_{yx}} \ket{x} \ket{y}
\end{align}

This implies, choosing $R = \widetilde{R} = (\ket{0}\bra{0} \otimes I)$, that $\widetilde{R}UR = \begin{pmatrix} \mathcal{P} & 0 \\ 0 & * \end{pmatrix}$.

Recall that $P^*_{x,y} := P_{y,x} \frac{\pi(y)}{\pi(x)}$, which allows one to recover $\mathcal{P}$ from $(\sqrt{P_{xy}P^*_{yx}})_{x,y \in [1,n]}$.
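This correspondence can be checked classically on a small instance. The following sketch is a numerical sanity check only (not a quantum implementation); it assumes a row-stochastic $P$ with $P_{xy}$ the probability of moving from $x$ to $y$, built from a symmetric weight matrix so that the chain is reversible:

\begin{verbatim}
import numpy as np

# Sanity check: for a reversible row-stochastic P with stationary
# distribution pi, the entries sqrt(P_xy * Pstar_yx) reproduce the
# symmetric operator Pi^{1/2} P Pi^{-1/2} exposed by the block encoding.
rng = np.random.default_rng(0)
W = rng.random((5, 5))
W = (W + W.T) / 2                     # symmetric weights -> reversible walk
P = W / W.sum(axis=1, keepdims=True)  # transition matrix
pi = W.sum(axis=1) / W.sum()          # stationary distribution

Pstar = (P * pi[:, None] / pi[None, :]).T   # Pstar[x,y] = P[y,x] pi(y)/pi(x)
calP = np.sqrt(P * Pstar.T)                 # entrywise sqrt(P_xy * Pstar_yx)

D = np.diag(np.sqrt(pi))
assert np.allclose(calP, D @ P @ np.linalg.inv(D))  # Pi^{1/2} P Pi^{-1/2}
assert np.allclose(calP, calP.T)                    # symmetric, as expected
\end{verbatim}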

Applying the above operators to the register $B$ and a $b$-qubit ancilla register $C$ yields:

\begin{align}
    \sum_{k\in [1,n]} \sigma_k \ket{u_k}_A \widetilde{R}U_\phi R (\ket{v_k}_B \ket{0}_C) &= \sum_{k\in [1,n]} \sigma_k \ket{u_k}_A  \left(\sum_{i \in [1,n]} T(\sigma_i) \ket{u_i}\bra{v_i}\ket{v_k}_B\right) \ket{\zeta}_C \\
    &= \sum_{k\in [1,n]} \sigma_k T(\sigma_k) \ket{u_k}_A  \ket{u_k}_B \ket{\zeta}_C \\
    &= \sum_{k\in [1,n]} Q(\sigma_k) \ket{u_k}_A  \ket{u_k}_B \ket{\zeta}_C \\
    &= \sum_{k\in [1,n]} Q(\sigma_k) \ket{u_k}_A  \ket{u_k}_B
\end{align}

Recall that the states $\ket{v_i}$ form an orthonormal basis, which
yields the second equality in the previous set of equations.

Since the register $C$ is unentangled from the register $B$, owing to the
block-diagonal form of the operator $\widetilde{R}U_\phi R$, it can be
uncomputed without modifying the global state of the system (last equality).

\paragraph{Circuit implementation of the algorithm:}
Figure \ref{fig:quantum_circuit_algo_1} presents a 
quantum circuit implementation of the algorithm introduced in the previous paragraph.
\begin{figure}
    \centering
    $\begin{array}{c}
    \Qcircuit {
    \lstick{C : \ket{0}_{n_b}} & {/} \qw &\qw &  \multigate{1}{\widetilde{R}U_\phi R} & \meter & \qw & \qw \\
    \lstick{B : \ket{0}_{n_b}} & {/} \qw & \multigate{1}{QRAM}  & \ghost{\widetilde{R}U_\phi R} &  \targ & \meter & \rstick{\ket{0}} \cw \\
    \lstick{A : \ket{0}_{n_b}} & {/} \qw & \ghost{QRAM} & \qw  & \ctrl{-1} &   \rstick{\sum_i \Tilde{h}(\lambda_i) \ket{u_i}} \qw \\
    & & \dstick{\ket{0}_A\ket{0}_B \rightarrow \sum_{ij} P_{i,j} \ket{i}_A \ket{j}_B} \cwx & & & \\
    & & & & & &}       
    \end{array}$
    
    \bigskip
    \caption{The quantum circuit used for Graph Fourier eigenbasis filtering}
    \label{fig:quantum_circuit_algo_1}
\end{figure}

\subsubsection{Complexity analysis.}\label{subsubsec:algo_1_complexity}

Since our algorithm combines three macroscopic quantum processes, we estimate the gate and memory cost of each
process and sum the individual costs to obtain the global
computational cost.\\

\paragraph{Cost of QRAM state preparation:}
As noted in \ref{rmq:QRAM_assumptions}, several authors have
introduced Quantum Random Access Memory structures that are able to
prepare any loaded state in time $\mathcal{O}(\mathrm{polylog}(n))$ with a low
polynomial degree.\\

\paragraph{Cost of $\widetilde{R}U_\phi R$ preparation:}
Let $(\widetilde{R}, R, U)$ be a unitary encoding of $P$, a real
operator. According to the results of \cite{gilyen_su_low_wiebe_2019}
(\textit{Corollary 18 \& Lemma 19}), given a real odd polynomial of
degree $d$, bounded by 1 on $[0,1]$, one can build $U_\phi$, after
$\mathcal{O}(\mathrm{poly}(d, \log(\frac{1}{\epsilon})))$ preprocessing
computations on a classical computer, using $d$ times:
\begin{itemize}
    \item the operators $U$ and $U^\dagger$,
    \item the gates $C_R NOT$ and $C_{\widetilde{R}} NOT$,
    \item single-qubit gates.
\end{itemize}
The process uses a constant number of ancilla qubits, which makes the overall number of required gates scale as $\mathcal{O}(d\, \mathcal{T}(U))$,
where $\mathcal{T}(U)$ is the cost of implementing the $C_R NOT$ and $U$ gates
\cite{gilyen_su_low_wiebe_2019}.\\

\paragraph{Cost of the last $C_{NOT}$ preparation:}
The last $C_R NOT$ operator requires $n_b$ two-qubit $CNOT$s, amounting to $\mathcal{O}(\log(n))$ quantum gates overall.\\

\paragraph{Patching up:}
By combining all the previous costs, one can deduce that the overall quantum gate cost of the circuit implementation
\ref{fig:quantum_circuit_algo_1} scales as
$\mathcal{O}(d\, \mathcal{T}(U) + S(\log(n)))$ with $S$ a low-degree
polynomial. This result shows a direct dependency between the number of quantum gates required and the degree of the simulated polynomial,
which explains the efficiency of our algorithm: in general, Fourier Kernels are well approximated by low-degree polynomials.
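To illustrate this last point, here is a small sketch (an assumed example, not taken from the paper: the heat-type kernel $h(x) = e^{-t(1-x)}$, bounded by 1 on $[-1,1]$) that estimates the Chebyshev interpolation degree needed to reach a uniform precision $\epsilon$:

\begin{verbatim}
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Smallest Chebyshev interpolation degree that eps-approximates the
# kernel h on [-1, 1]; the degree grows slowly with t and log(1/eps).
t, eps = 5.0, 1e-6
h = lambda x: np.exp(-t * (1.0 - x))
xs = np.linspace(-1.0, 1.0, 2001)
for d in range(1, 100):
    p = Chebyshev.interpolate(h, d, domain=[-1, 1])
    if np.max(np.abs(p(xs) - h(xs))) < eps:
        print("sufficient degree:", d)
        break
\end{verbatim}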

\subsection{Algorithm Discussions}\label{subsubsec:algo_1_extensions}

We have derived several interesting and general results from a precise exploitation of the Quantum Singular Value Transformation algorithm introduced by A. Gilyén \textit{et al.} in
\cite{gilyen_su_low_wiebe_2019}. In this part we analyse several extensions to this algorithm that may be of interest, while giving examples of Kernels efficiently implemented within the algorithm.

\subsubsection{Undirected graph version:}
Even though we have only worked with directed graphs, our Quantum Graph Frequency-Based Inverse Fourier Transform transposes well to the undirected framework. Indeed, random walks on undirected graphs are always reversible and symmetric, which means that the reversibility condition on
the input may be dropped in the undirected setting. Several Spectral Graph Theory applications only make use of undirected graphs
\cite{ortega_frossard_kovacevic_moura_vandergheynst_2018,
  shuman_narang_frossard_ortega_vandergheynst_2013,
  ricaud_borgnat_tremblay_goncalves_vandergheynst_2019} and our
algorithm may be of great use in those cases. Please note also that
these undirected graphs do not need to be regular: our
results also apply to undirected non-regular graphs (whereas the power
method for PageRank does not converge for these graphs).

\subsubsection{Non reversible graphs:}
As discussed before, our algorithm is adapted to regular
non-reversible graphs; however, in this case, one loses the direct correspondence between transition matrix singular values and eigenvalues, due to the absence of symmetries of the transition matrix. Indeed, the operator
$\mathcal{P} = \Pi^{-1/2} P \Pi^{1/2}$ is no longer self-adjoint and one can only claim that the singular values of $\mathcal{P}$ are the
square roots of the eigenvalues of
$\mathcal{P}\mathcal{P}^* = \Pi^{-1/2} P \Pi P^* \Pi^{-1/2}$. In addition, the polynomials used for singular value transformation must be either even or odd (see \cite{gilyen_su_low_wiebe_2019}).

\subsubsection{Spectral Graph signal filtering application:}
Another interesting use case of the algorithm corresponds to a slightly modified version of our original problem. Given
$\mathcal{P} = \Pi^{1/2} P \Pi^{-1/2}$, where $P$ is a regular and
reversible transition matrix, and assuming that one can produce an initial quantum superposition of eigenvectors $\ket{u_k}$ of
$\mathcal{P}$, $\ket{\psi}_i = \sum_k c_k \ket{u_k}$, then for every
polynomial $Q$ bounded by 1 on $[-1,1]$, the algorithm presented above can produce the output quantum state:

\begin{equation}
    \ket{\psi}_f = \sum_k c_k Q(\lambda_k) \ket{u_k} 
\end{equation}
    
where $\lambda_k$ is the eigenvalue associated with $\ket{u_k}$.

Thus, if $h$ is an analytic function bounded by 1 on $[-1,1]$ and
$\epsilon$-approximated by a polynomial $Q_\epsilon$ on $[-1,1]$, one
can build the following state, with precision $\epsilon$ on the values of
$h(\lambda_k)$:

\begin{equation}
    \ket{\psi}_f = \sum_k c_k h(\lambda_k) \ket{u_k} 
\end{equation}
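For reference, the effect of such a filter can be emulated classically by direct eigendecomposition. The following sketch assumes the symmetric (reversible) setting, so that \texttt{numpy.linalg.eigh} applies:

\begin{verbatim}
import numpy as np

# Classical emulation of |psi_f> = sum_k c_k h(lambda_k) |u_k>
# (a reference computation, not the quantum circuit).
def spectral_filter(calP, signal, h):
    lam, U = np.linalg.eigh(calP)   # eigenpairs of the symmetric operator
    c = U.T @ signal                # Graph Fourier coefficients c_k
    return U @ (h(lam) * c)         # filtered signal, back in vertex space

# Example filter, bounded by 1 on [-1, 1]:
# filtered = spectral_filter(calP, psi,
#                            lambda lam: np.exp(-5.0 * (1.0 - lam)))
\end{verbatim}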

Results can also be derived for complex graph filters
(according to \textit{Corollary 8} of \cite{gilyen_su_low_wiebe_2019}),
provided that the norm of the graph filter is bounded by
$\nicefrac{1}{4}$ on $[-1,1]$. A consequence is that, given any initial graph signal $\phi$ (which can be written as a quantum superposition of the Fourier basis eigenvectors of $\mathcal{P}$), one can apply the linear filter $h(\lambda_1, \hdots, \lambda_n)$ to its Graph Fourier components, where $h$ is any real function, $\epsilon$-approximated by a given family of polynomials, and bounded by 1 on $[-1,1]$.

Signal filtering is ubiquitous in Spectral Graph Theory applications
\cite{sandryhaila_moura_2014,
  shuman_narang_frossard_ortega_vandergheynst_2013, sevi2019,
  ortega_frossard_kovacevic_moura_vandergheynst_2018}, and our
algorithm may be used to implement a wide range of such linear filters
on Graph Fourier Components, the only condition on these filters being
that they must be bounded by 1 on $[-1,1]$. Note that any
continuous function can be made bounded by 1 on $[-1,1]$ by dividing
it by its maximum value on $[-1,1]$ (which exists since the function
is continuous).


\section{Spectral Graph Theory Quantum PageRank Algorithm.}\label{sec:SGTapplicationToPR}

\textit{Throughout this section, we will assume given an input
  directed weighted graph $\mathcal{G}=(V,E,W(E))$ and its associated
  transition matrix $P$.}

In this section, we introduce a generalized version of the PageRank using Spectral Graph Theory. This generalized PageRank can be seen as a detailed application of the quantum algorithm (inverse graph Fourier transform) of \ref{sec:quantumGFT}.

Using results stemming from Spectral Graph Theory, we derive a generalization of the PageRank algorithm that better translates local graph properties into the node ranking produced by the algorithm. This work is the first of its kind, as, to our knowledge, a complete adaptation of
spectral graph theory to the PageRank has not yet been proposed. Our algorithm is able to adjust the influence of local and global graph structures on the final ranking, and can provide a whole new class of importance criteria that can be used to produce new node rankings for any graph.
 
\subsection{Generalized Importance Vector}
In this subsection we make precise the notion of ranking by \textit{order of importance} and generalize this notion, providing a new class of \textit{Generalized PageRank rankings}.

\begin{definition}[Generalized importance
  vector]\label{def:generalized_importance_vector}
  Let $\lambda_1 \leq \lambda_2 \leq ... \leq \lambda_n$ be the eigenvalues of the Laplacian associated with a given transition matrix $P$ of the input graph and $(u_1, ..., u_n)$ a basis of unit eigenvectors associated to these eigenvalues.

  Given a non-increasing Graph Kernel $\mathbb{K}$ (see
  \ref{par:graph_kernel}), \textit{i.e.} such that
\begin{equation}
    \forall (i,j) \in V^2,\ i \leq j \implies \mathbb{K}_i \geq \mathbb{K}_j,
\end{equation}
  the $\mathbb{K}$-generalized importance vector is the inverse Fourier
  Transform of $\mathbb{K}$.

\end{definition}

Here are some remarks regarding the previous definition:
\begin{itemize}
\item Most of the time, the transition matrix $P$ will be chosen to be either the classical or the personalized Google Matrix, since these matrices are regular and easily computable. However, we have also found that other candidates yield interesting properties: we will give some examples and discuss their usefulness in our framework.

\item A first useful example of a generalized importance vector is given by the Kernel $\delta^0$, where
  $\delta^0_{i} = \delta_{i \in \mathbb{J}}$,
  $\mathbb{J} = \{ i \in V \mbox{ s.t. } \lambda_i = 0 \}$ and
  $\lambda_i$ is the $i^{th}$ eigenvalue of the Laplacian of the
  Google Matrix. In the case of the Google Matrix, this amounts to computing the classical PageRank, \textit{i.e.} the limit probability distribution of $\mathcal{G}$. We therefore call the Kernel $\delta^0$ the classical PageRank Kernel.
    
\item An interesting choice of Kernel is the Heat Kernel
  $\mathbb{H} = (e^{-\lambda_1 t}, e^{-\lambda_2 t}, \hdots,
  e^{-\lambda_n t})$, $t \in \mathbb{R^+}$. This Kernel models heat propagation on the graph, as driven by the Laplacian: through the choice of $t$, one sets the duration of the heat propagation, and can thereby carefully
  adjust the locality of the ranking (see the sketch after this list). Note that the work of
  \cite{yang_king_lyu_2007} corresponds to the Heat Kernel special case of our PageRank generalization.
    
\item The approach presented in \cite{baeza-yates_boldi_castillo_2006} can also be treated in our framework. Indeed, the power series decomposition
  $\underset{k\in [|0,\infty|]}{\sum} \gamma_k P^k$ can be rewritten, using the eigendecomposition $P=JUJ^{-1}$:
    
    \begin{align}
        \underset{k\in [|0,\infty|]}{\sum} \gamma_k P^k &= J \left(\underset{k\in [|0,\infty|]}{\sum} \gamma_k (diag(\lambda_1^k, \hdots, \lambda_n^k) + R^k)\right) J^{-1} \\ &= J  \left(diag(f(\lambda_1), \hdots, f(\lambda_n)) + \underset{k\in [|0,r^{max}|]}{\sum} \gamma_k R^k\right) J^{-1}
    \end{align} 
    where $r^{max} < \infty$ and
    $f : \mathbb{C} \rightarrow \mathbb{C}; z \mapsto
    \underset{k\in [|0,\infty|]}{\sum} \gamma_k z^k$ is an analytic
    function.
    
\end{itemize}
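As announced in the Heat Kernel remark above, here is a minimal sketch of the corresponding importance vector computation (assuming a symmetric, or reversibilized, Laplacian $L$ so that \texttt{numpy.linalg.eigh} applies):

\begin{verbatim}
import numpy as np

# H-generalized importance vector for the Heat Kernel: the inverse
# graph Fourier transform of H = (exp(-lambda_k * t))_k.
def heat_kernel_importance(L, t):
    lam, U = np.linalg.eigh(L)   # eigenvalues sorted in ascending order
    kernel = np.exp(-lam * t)    # non-increasing Heat Kernel
    return U @ kernel            # sum_k exp(-lambda_k * t) u_k

# Small t emphasizes local structure; as t grows, only the lambda = 0
# component survives and the ranking approaches the classical PageRank.
\end{verbatim}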

\subsubsection*{A comment on the Google Matrix}\label{subsec:main_consequences}
The Fourier Transform framework allows us to derive a generalized version of the PageRank algorithm which may better distinguish secondary hubs of nodes. The generalized importance vector allows one to fine-tune the balance between global graph structure and local graph properties.

In fact, this property can be easily understood for the \textit{Heat
  Kernel} $\mathcal{H}$: when the parameter $t$ is close to 0, the
heat has not yet propagated through the whole input graph, leading to a more prominent local description of the input graph; whereas when $t \rightarrow \infty$, one has access to a global description of the properties of the input graph.


To fully exploit the power of Spectral Graph Theory, we have introduced several classes of candidate matrices that can be used to compute the Generalized PageRank: indeed, we have seen in section 
\ref{sec:SGTIntro} that only the properties of connectivity and aperiodicity are required to be able to apply the Perron-Frobenius Theorem. Thus the original Google Matrix captures a very specific case of matrix regularization, which favors the general graph structure
over local properties by adding a complete graph with low transition weights to the original graph. Similarly, the personalized PageRank's use of the personalization vector refines the ranking around a specific node but introduces dense matrix rows, which may be
complicated to deal with in practice \cite{langville_meyer_2004}.


\subsection{Generalising  the Google Matrix.}
We introduce a generalization of the Google Matrix, and study some
interesting applications of our novel PageRank Matrices - Almost
Directed Stochastic Matrices (ADSM) - while proving that the classical
PageRank Matrix is in fact a very particular instance of ADSM. 

This generalized version of the Google Matrix heavily
relies on Spectral Graph Theory, as it is inspired by the use of the \textit{reversibilized} and \textit{lazy} matrix versions in the transition matrix regularization process. Indeed, these two extended matrix forms are useful to prepare admissible candidates for the PageRank theorem, \textit{i.e.}
regular matrices.

\subsubsection{Patching the dangling nodes}
We will use the following version of the Weakly
Preferential PageRank method (see \ref{def:dangling}):
\begin{equation}
    P = \Bar{P} + \sum_{i\leq C} c_{i} u_{i}^T
\end{equation}

where $C$ is the number of connected components of $\mathcal{G}$,
$c_{i}$ the indicator vector of the $i^{th}$ connected component, and $u_i$ a uniform distribution on the $i^{th}$ connected component of the graph.

This way of patching dangling nodes is interesting as it doesn't link two seemingly unrelated connected components: this is necessary if these
components are to stay distinct in the final Google Matrix representation.

\subsubsection{Almost Directed Stochastic Matrices}
Next, we introduce the notion of Loop Completing Graph Extension that
heavily simplifies the expression of our generalized Google Matrix.

\begin{definition}[Loop Completing Graph Extension]
  Let $G = (V, E, W(E))$ be a directed weighted graph, we call a
  \textbf{loop completing directed weighted graph extension} (or
  simply loop completing graph extension) of G,
  $G^e = (V, E^e, W(E)^e)$, a graph that has the following properties:
\begin{enumerate}
    \item G is a subgraph of $G^e$, \textit{i.e.} $E \subseteq E^e$;
    \item $\forall x \in V$, $(x,x) \in E^e$: the graph extension has self-looping edges;
    \item $\forall (i,j) \in E^e \setminus E, (j,i) \in E^e$: the added graph extension edges are undirected.
\end{enumerate}

\end{definition}

Then we rely on loop completing graph extensions to define a Generalized PageRank matrix - the \textit{Almost Directed Stochastic Matrices}, or ADSM.

The intuition behind ADSM is the following: given two nodes $x$ and $y$ of any graph $\mathcal{G}$ and a Markov chain $C$ on
$\mathcal{G}$, and assuming that the initial probability distribution
of the walkers is uniform (\textit{i.e.}
$\forall i \in V, \mathbb{P}(X_0= i)= \frac{1}{|V|}$), we have the
following equalities stemming from Bayes' formula:

\begin{align}\label{eq:ADSM_intuition}
    P^{T_{stoch}}(y,x) := \mathbb{P}(X_0 = y | X_1 = x) &= \frac{\mathbb{P}(X_1=x | X_0 = y) \mathbb{P}(X_0 = y)}{\sum_{k\in V} \mathbb{P}(X_1 = x | X_0 = k) \mathbb{P}(X_0=k) } \\ 
    &= \frac{\mathbb{P}(X_1=x | X_0 = y)}{\sum_{k\in V} \mathbb{P}(X_1 = x | X_0 = k) } \\
    &= \frac{P_{y,x}}{\sum_{k\leq n} P_{k,x}}
\end{align}

Then, one can define a \textit{weak reversibilization} version
$P^{T_{stoch}}$ of a transition matrix using the last equation. This weak reversibilization unveils interesting results when walkers are uniformly distributed initially.

Equation \ref{eq:ADSM_intuition} shows a locally tempered behaviour for the reversed edges. The
more connected the node $y$, the lower $\mathbb{P}(X_1=x|X_0=y)$
and the higher $\mathbb{P}(X_1=y)$ will be: reversed edges penalize highly connected nodes, producing a fairer ranking. Thus, an interpretation of weak and strong reversibilization is the following: the weak reversibilization satisfies the power equilibrium principle in the immediate neighbourhood of each node, whereas the strong reversibilization satisfies a global power equilibrium
principle, thanks to the limit probability distribution matrix $\pi$.

These matrices are called ``almost directed'' because the weight attributed to the artificial reversed edges is small enough that the probability of the random walker taking these edges is quite low (even though not equal to zero):

\begin{definition}[Almost Directed Stochastic Matrices]\label{def:ADSM1}
Let $P$ be a left stochastic matrix. We define:
\begin{equation}
    \mathcal{P} = (1-\alpha) P + \alpha (G^{gen})^{T_{stoch}}
\end{equation}
with
\begin{equation*}
(G^{gen})^{T_{stoch}} = (G^{gen})^{T} \boxtimes \frac{1}{G^{gen}_1},
\qquad
\left(\frac{1}{G^{gen}_1}\right)_i = \left\{
    \begin{array}{ll}
        \frac{1}{\sum_{j\leq n} G^{gen}_{ji}} & \mbox{if $i$ has incoming edges} \\
        0 & \mbox{otherwise}
\end{array} \right.
\end{equation*}
where $\frac{1}{G^{gen}_1}$ is a column vector, $\boxtimes$ is the elementwise product (the $i^{th}$ row of $(G^{gen})^{T}$ is multiplied by $(\frac{1}{G^{gen}_1})_i$), and $\alpha$ is a constant (usually small).

The \textit{Almost Directed Stochastic Matrix} $\mathcal{P}$ is \textit{regular}, which means that one can apply the Perron-Frobenius theorem and compute a limit probability distribution.

\end{definition}

\begin{proof}[Proof] Let $P$ be this stochastic matrix; by column permutation, we can find a basis in which $P$ has the form:
\begin{equation}
P = \begin{pmatrix}
P_1 & 0 & \cdots & 0\\
0 & P_2 & \cdots & 0 \\
0 & 0 & \ddots & 0\\
0 & 0 & 0 & P_n
\end{pmatrix}
\end{equation}
where $P_i$ is a stochastic matrix representing a strongly connected directed graph. Then
$\mathcal{P} = \begin{pmatrix} \mathcal{P}_1 & 0 & \cdots & 0\\
  0 & \mathcal{P}_2 & \cdots & 0 \\
  0 & 0 & \ddots & 0\\
  0 & 0 & 0 & \mathcal{P}_n
\end{pmatrix}$ where $\mathcal{P}_i$ is a regular stochastic matrix. The result holds.
\end{proof}

We introduce the \textit{Weak Stochastic
  Transposition} operator, $(\cdot)^{T_{stoch}}: P \rightarrow P^{T_{stoch}}$, which maps the set of left-stochastic matrices onto itself. This operator acts on any stochastic matrix (not solely the regular ones) and outputs a transition matrix with reversed edges, using only the original transition matrix. This operator is called \textit{weak} because it does not bear the usual transposition properties (linearity, for instance), contrary to the reversibilization operator $(\cdot)^*$, which displays a behaviour similar to transposition. Thus, one can regard the $(\cdot)^*$ operator as a \textit{Strong Stochastic Transposition} that only acts on regular left-stochastic matrices.

The \textit{weak stochastic transposition} can be computed efficiently by caching the sums $\frac{1}{G^{gen}_1}$ in memory and updating them incrementally.
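The construction can be sketched as follows (assumed conventions: $G^{gen}_{ij}$ is the weight of edge $i \rightarrow j$, and the rows of the output are normalized as in definition \ref{def:ADSM1}):

\begin{verbatim}
import numpy as np

# Weak stochastic transposition: row i of G^T is scaled by the inverse
# of the total incoming weight of node i (0 if i has no incoming edge).
def weak_stochastic_transpose(G_gen):
    in_w = G_gen.sum(axis=0)
    inv = np.divide(1.0, in_w, out=np.zeros_like(in_w), where=in_w > 0)
    return G_gen.T * inv[:, None]

# ADSM: mix the original transition matrix with the weakly transposed
# loop completing graph extension; a small alpha keeps the probability
# of following an artificial reversed edge low.
def adsm(P, G_gen, alpha=0.05):
    return (1.0 - alpha) * P + alpha * weak_stochastic_transpose(G_gen)
\end{verbatim}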

Figure \ref{fig:ADSM_construction} provides examples of ADSM graph construction.


\begin{figure}
\centering
\begin{subfigure}{0.45\textwidth}
\centering
 \resizebox{\linewidth}{!}{\begin{tikzpicture}[->,>=stealth',shorten >=5pt,auto,node distance=3.5cm,
                    semithick]
  \tikzstyle{every state}=[fill=red,draw=none,text=white]
  
  \node[state] (A)                    {$a$};
  \node[state]         (B) [above right of=A] {$b$};
  \node[state]         (D) [below right of=A] {$d$};
  \node[state]         (C) [below right of=B] {$c$};


  \path (A) edge              node[anchor = north, below] {0.5} (B)
            edge [loop left, blue]  node {0.33} (A)
            edge              node {0.25} (C)
        (B) edge    [bend right=80]          node[above] {0.33} (A)
            edge              node {0.25} (C)
            edge [loop above, blue]  node {0.5} (B)
        
        (C)  edge [loop right, blue]  node {0.25} (C)
        
        (D) edge            node[above] {0.25} (C)
            edge              node {0.33} (A)
            edge [loop below, blue]  node {1} (D);
\end{tikzpicture}}
\caption{Initial graph: we add self-looping edges to make the graph aperiodic}
\end{subfigure}%
\hfill
\begin{subfigure}{0.45\textwidth}
\centering
 \resizebox{\linewidth}{!}{\begin{tikzpicture}[->,>=stealth',shorten >=5pt,auto,node distance=3.5cm,
                    semithick]
  \tikzstyle{every state}=[fill=red,draw=none,text=white]
  
  \node[state] (A)                    {$a$};
  \node[state]         (B) [above right of=A] {$b$};
  \node[state]         (D) [below right of=A] {$d$};
  \node[state]         (C) [below right of=B] {$c$};


  \path (A) edge              node[anchor = north, below, text = purple] {0.471} (B)
            edge [loop left, blue, text=purple]  node {0.326} (A)
            edge    node {0.2125} (C)
            edge [bend right = 80, red, dashed] node[below] {0.035} (D)
            
        (B) edge    [bend right=80, text = purple]          node[above] {0.350} (A)
            edge              node[below] {0.2125} (C)
            edge [loop above, blue, text=purple]  node {0.494} (B)
        
        (C)  edge [loop right, blue]  node {0.3625} (C)
             edge [bend right = 80, red, dashed] node {0.035} (B)
             edge [bend left = 30, red, dashed] node[below] {0.035} (A)
             edge [bend left = 80, red, dashed] node {0.026} (D)
             
        (D) edge node[below] {0.2125} (C)
            edge [text=purple]              node {0.281} (A)
            edge [loop below, blue, text = purple]  node {0.939} (D);
\end{tikzpicture}}
\caption{Final almost directed stochastic matrix: we added multiple reversed edges.}
\end{subfigure}
\caption{A construction example for Almost Directed Stochastic Matrices}
\label{fig:ADSM_construction}
\end{figure}

Besides, the notion of \textit{Almost Directed Stochastic Matrices} built on a \textit{Loop Completing Graph Extension} can also be seen as a generalisation of the classical PageRank. In fact, the Google Matrix is the particular
case in which $\mathcal{G}^e$ is a complete graph.

Hence, the introduction of almost directed stochastic matrices based on the completing graph extension is relevant: depending on the shape of the completing graph extension, the heat flow through the graph changes, and the local properties of the graph are not
translated the same way into the final result.

\subsection{Results and analysis}
\label{sec:results}

To better grasp our PageRank algorithm extensions, we have produced several graph rankings on a classical
computer. Here we present results for various input graphs. In these numerical applications, we have used the \textit{reversed ADSM}, the \textit{reversibilized version} of the ADSM, $\frac{\mathcal{P} + \mathcal{P^*}}{2}$, to comply with the definition of the directed graph Fourier transform in \ref{sec:SGTIntro}.

Figures \ref{fig:simpleHeatK2} and \ref{fig:simpleHeatK3}
compare the importance distributions of the classical PageRank method and of a Heat Kernel method (parameter $t=5$).
The numerical data used to build these plots are provided in figures \ref{fig:ranking2} and \ref{fig:ranking3}. In figure \ref{fig:simpleheatKComparison}, one
can note that the importance vectors produced are fairly close to the standard classical PageRank importance vectors: the greater the time, the more damped the Kernel distribution, and the closer the importance vector gets to
the stationary distribution. One can remark that for lower $t$ parameters ($t=5$ and below), the importance
vector unveils secondary hubs more clearly, as shown by the greater oscillations of the Generalized Importance Vector associated with these Kernels. These oscillations can be clearly seen in figures
\ref{fig:complexHeatK1} and \ref{fig:complexHeatKComparison}, where the Heat Kernel based importance vectors attribute a higher score to nodes of secondary importance (\textit{i.e.} nodes ranked after the fifth position for these graphs) when compared to the classical PageRank.

Figure \ref{fig:complexHeatKComparison} shows that this effect is even more accentuated for Heat Kernels with low time parameters ($t\leq 5$). This secondary hub highlighting effect appears clearly for sufficiently large graphs (50 nodes and more); for very small graphs, several boundary effects may affect the global ranking. Yet, this secondary node enhancement can be clearly seen in figures \ref{fig:simpleHeatK2} and \ref{fig:simpleHeatK3}: in figure \ref{fig:simpleHeatK2} the effect is clear for nodes 0, 3, 4, 5, 6 and 9; and in figure \ref{fig:simpleHeatK3}, nodes 0, 3, 4 and 10 exhibit an enhanced importance score when compared to the classical PageRank. This results in a more balanced importance distribution for the graphs associated with the Heat Kernels in figures \ref{fig:simpleHeatK2} and \ref{fig:simpleHeatK3}.


\begin{figure}[H]
    \centering
    \begin{subfigure}{0.45\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple2_hk5_1.png}
        \caption{Generalized Importance Vector}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.45\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple2_hk5_classical.png}
        \caption{Classical PageRank}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.65\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple2_hk5_heat.png}
        \caption{Heat Kernel t=5, Reversed ADSM}
    \end{subfigure}
    
    \caption{Generalized PageRank simulation using a Heat Kernel of parameter $t=5$ for the reversibilized version of the ADSM}
    \label{fig:simpleHeatK2}
\end{figure}


\begin{figure}[H]
    \centering
    \begin{subfigure}{0.45\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple3_hk5_1.png}
        \caption{Generalized Importance Vector}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.45\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple3_hk5_classical.png}
        \caption{Classical PageRank}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.65\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple3_hk5_heat.png}
        \caption{Heat Kernel t=5, Reversed ADSM}
    \end{subfigure}
    
    \caption{Generalized PageRank simulation using a Heat Kernel of parameter $t=5$ for the reversibilized version of the ADSM of a simple graph}
    \label{fig:simpleHeatK3}
\end{figure}

\begin{figure}[H]
    \centering
    \begin{subfigure}{0.7\textwidth}
        \includegraphics[width=1\textwidth]{results_figure_simple/simple_hk579_1.png}
        \caption{Generalized Importance Vector}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.7\textwidth}
        \includegraphics[width=1\textwidth]{results_figure_simple/simple_hk579_2.png}
        \caption{Kernel distribution for generalized PageRank}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple_hk579_class.png}
        \caption{Classical PageRank}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple_hk579_heat9.png}
        \caption{Heat Kernel t=9, Reversed ADSM}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple_hk579_heat7.png}
        \caption{Heat Kernel t=7, Reversed ADSM}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/simple_hk579_heat5.png}
        \caption{Heat Kernel t=5, Reversed ADSM}
    \end{subfigure}
    
    \caption{Generalized PageRank simulation using different Heat Kernels for the reversibilized version of the ADSM of a complex graph}
    \label{fig:simpleheatKComparison}
\end{figure}

\begin{figure}[!ht]
    \centering
    \begin{subfigure}{0.7\textwidth}
        \includegraphics[width=1\textwidth]{results_figure_simple/complex_hk_135_1.png}
        \caption{Generalized Importance Vector}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.7\textwidth}
        \includegraphics[width=1\textwidth]{results_figure_simple/complex_hk_135_2.png}
        \caption{Kernel distribution for generalized PageRank}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/complex_hk_135_class.png}
        \caption{Classical PageRank}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/complex_hk_135_heat5.png}
        \caption{Heat Kernel t=5, Reversed ADSM}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/complex_hk_135_heat3.png}
        \caption{Heat Kernel t=3, Reversed ADSM}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.47\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/complex_hk_135_heat1.png}
        \caption{Heat Kernel t=1, Reversed ADSM}
    \end{subfigure}
    
    \caption{Generalized PageRank simulation using different Heat Kernels for the reversibilized version of the ADSM}
    \label{fig:complexHeatKComparison}
\end{figure}

Several interesting Kernels can be used to prepare less common rankings; we give some examples of such Kernels in the figures presented in appendix A:
figure \ref{fig:cosK5} (cosine Kernel), figure \ref{fig:monoK5} (monomial
Kernel) and figure \ref{fig:invK5} (inverse Kernel). The cosine Kernel, $(\cos(\lambda_i \nicefrac{\pi}{4})^t)_i$, with $t=5$ in figure \ref{fig:cosK5}, provides a considerably different ranking where external nodes achieve significantly higher rankings, to the detriment of central nodes, which seem to be penalized by this importance distribution. The inverse Kernel behaviour seems to be highly sensitive to local graph structure: in figure \ref{fig:invK5}, this phenomenon is highlighted by the fact that the right part of the importance distribution graph absorbs most of the importance score for the inverse Kernel, whereas this phenomenon does not appear in the classical PageRank. Finally, the monomial Kernel, illustrated in figure \ref{fig:monoK5}, provides a more balanced ranking than the other Kernels for the same parameter: the difference between the most important nodes and the secondary ones is less visible for this kind of importance distribution.

Figure \ref{fig:benchmark} provides a first benchmark for the
convergence analysis of ADSM using the power method. One notes that, for a given precision $\epsilon$, the number of iterations remains constant as the number of nodes of randomly generated Barabási-Albert graphs increases. Adding this observation to the fact that ADSM are sparse
matrices, computing the reversibilized version of an ADSM can be done in $\mathcal{O}(nnz(P))$ operations, where $P$ is the transition matrix of the input graph; this is at least as fast as the classical PageRank.
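The benchmark loop can be reproduced along the following lines (a sketch under assumptions: a column-stochastic input matrix, \texttt{networkx}'s \texttt{barabasi\_albert\_graph} generator for the random input graphs, and the ADSM construction sketched earlier):

\begin{verbatim}
import numpy as np

# Power iteration on a column-stochastic matrix M: count the number of
# iterations needed to reach an L1 precision eps on the fixed point.
def power_method_iterations(M, eps=1e-7, max_iter=10000):
    n = M.shape[0]
    x = np.full(n, 1.0 / n)
    for k in range(1, max_iter + 1):
        x_new = M @ x
        x_new /= x_new.sum()
        if np.abs(x_new - x).sum() < eps:
            return k
        x = x_new
    return max_iter

# For each size n: build the reversed ADSM of a random Barabasi-Albert
# graph (networkx.barabasi_albert_graph(n, 5)) and record the count.
\end{verbatim}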


\begin{figure}[h!]
\centering
\csvreader[separator = semicolon, head to column names, 
tabular=|c|c|c|c|c|, before reading = \sisetup{
round-mode = places,
round-precision = 3
},
table head=\hline \multicolumn{1}{|p{2cm}|}{\centering Ranking} & \multicolumn{1}{p{2cm}|}{\centering Node \newline Classical \newline Kernel} & \multicolumn{1}{p{2cm}|}{\centering Score \newline Classical \newline Kernel} & \multicolumn{1}{p{2.5cm}|}{\centering Node \newline Heat Kernel \newline Time = 5} & \multicolumn{1}{p{2cm}|}{\centering Score  \newline Heat Kernel \newline Time = 5},
late after head=\\\hline\hline,
late after line=\csvifoddrow{\\\hline}{\\\hline}]
{results_csv/rankings_simple2_hk5.csv} {Ranking=\rank, Node Number Classical PageRank= \nnClass, Score Classical PageRank = \scoreClass, {Node Number Heat Kernel t = 5} =\nnHk, {Score Heat Kernel t = 5} =\scoreHk} {\rank & \nnClass & \tablenum{\scoreClass} & \nnHk & \tablenum{\scoreHk}}

    \caption{Ranking comparison between the classical PageRank and the generalized PageRank using the Heat Kernel of parameter t=5. Data from figure \ref{fig:simpleHeatK2}}
    \label{fig:ranking2}
\end{figure}

\begin{figure}[h!]
\centering
\csvreader[separator = semicolon, head to column names, 
tabular=|c|c|c|c|c|, before reading = \sisetup{
round-mode = places,
round-precision = 3
},
table head=\hline \multicolumn{1}{|p{2cm}|}{\centering Ranking} & \multicolumn{1}{p{2cm}|}{\centering Node \newline Classical \newline Kernel} & \multicolumn{1}{p{2cm}|}{\centering Score \newline Classical \newline Kernel} & \multicolumn{1}{p{2.5cm}|}{\centering Node \newline Heat Kernel \newline Time = 5} & \multicolumn{1}{p{2cm}|}{\centering Score  \newline Heat Kernel \newline Time = 5},
late after head=\\\hline\hline,
late after line=\csvifoddrow{\\\hline}{\\\hline}]
{results_csv/rankings_simple3_hk5.csv} {Ranking=\rank, Node Number Classical PageRank= \nnClass, Score Classical PageRank = \scoreClass, {Node Number Heat Kernel t = 5} =\nnHk, {Score Heat Kernel t = 5} =\scoreHk} {\rank & \nnClass & \tablenum{\scoreClass} & \nnHk & \tablenum{\scoreHk}}

    \caption{Ranking comparison between the classical PageRank and the generalized PageRank using the Heat Kernel of parameter t=5. Data from figure \ref{fig:simpleHeatK3}}
    \label{fig:ranking3}
\end{figure}
\newpage

\begin{figure}[H]
    \centering
    \begin{subfigure}{0.45\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/complex1_hk_5_1.png}
        \caption{Generalized Importance Vector}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.45\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/complex1_hk_5_class.png}
        \caption{Classical PageRank}
    \end{subfigure}
    \hfill
    \begin{subfigure}{0.65\textwidth}
        \includegraphics[width=1.1\textwidth]{results_figure_simple/complex1_hk_5_heat.png}
        \caption{Heat Kernel t=5, Reversed ADSM}
    \end{subfigure}
    
    \caption{Generalized PageRank Heat Kernel comparison with the Classical PageRank for a complex graph. Parameter t = 5}
    \label{fig:complexHeatK1}
\end{figure}

\begin{figure}[H]
    \centering
    \begin{tabular}{|c|c|}
    \hline
        \textbf{Number of nodes} & \textbf{Number of iterations}\\
        \hline \hline
        100 & 21 \\
        \hline
        215 & 22 \\
\hline 464 & 22 \\
\hline 1000 & 22 \\
\hline 2154 & 21 \\
\hline 4641 & 20 \\
\hline 10000 & 20 \\
\hline 21544 & 18 \\
\hline 46415 & 22 \\
\hline 100000 & 17\\
\hline
    \end{tabular}
    \caption{Results of the convergence benchmark for the ADSM. Precision $10^{-7}$, graph type: Barabási-Albert, parameter $5$}
    \label{fig:benchmark}
\end{figure}


\subsection{Generalized PageRank as an application to Quantum Inverse Graph Fourier Transform}
In the PageRank framework discussed in
\ref{def:generalized_importance_vector}, the algorithm proposed in section \ref{sec:quantumGFT} can be
used directly, as it can simulate any $\epsilon$-polynomial-approximated function of the transition matrix eigenvalues for any reversible Markov chain.

A possible use case applied to the PageRank problem could be the following: after having computed a classical ranking with the ADSM structure introduced in section \ref{sec:SGTapplicationToPR}, in a number of operations scaling linearly with the size of the input matrix $N$, one can use the limit probability distribution vector to build a reversible transition matrix (in time $\mathcal{O}(N)$ due to the sparsity of ADSM) and apply the algorithm to build a finer ranking. Amplitude estimation routines can then be applied to the final quantum state to get refined rankings for hardly distinguishable sets of nodes. Since ADSM preserve strongly connected components, our quantum PageRank algorithm may be applied directly to a given connected component, producing a final quantum state representing the nodes of this particular connected component.
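The reversibilization step mentioned above is straightforward to express; the following sketch assumes a dense row-stochastic $P$ and its limit distribution $\pi$ (with a sparse ADSM, the same computation runs in $\mathcal{O}(nnz(P))$):

\begin{verbatim}
import numpy as np

# Reversibilization: Pstar[x, y] = pi(y) * P[y, x] / pi(x); for a
# row-stochastic P with stationary pi, (P + Pstar) / 2 is reversible.
def reversibilize(P, pi):
    Pstar = (P * pi[:, None]).T / pi[:, None]
    return (P + Pstar) / 2.0
\end{verbatim}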

\subsection{Comparison with Paparo's discrete PageRank} \label{par:paparo_comp} One can note
several similarities between our Generalized Quantum PageRank and the
one G. Paparo \textit{et al.} introduced in 2012 in
\cite{paparo_martin-delgado_2012}. The authors derive the following expression for the instantaneous PageRank score:

\begin{equation} \label{eq:instantaneous_pr}
    I_q(P_i, m) = \| \sum_{\mu_{i, \pm}} \mu_{i, \pm}^{2m} \bra{i}_2 \ket{\mu_{i, \pm}} \braket{\mu_{i, \pm} | \psi (0)} \|^2
\end{equation}

where $\ket{\psi(0)}$ is an initial quantum state, $\mu_{i, \pm}$ are the eigenvalues of the Quantum Walk step operator that lie on the dynamical subspace (see \ref{sec:previousResults}), and
$\ket{\mu_{i, \pm}}$ the associated eigenvectors. Given the classical Google Matrix $G$, denoting by $D$ the discriminant matrix defined by $D_{ij} = \sqrt{G_{ij} G_{ji}}$, by $\lambda_i$ its eigenvalues and by $\ket{\lambda_i}$ the associated eigenvectors, Paparo \textit{et al.} derive the following set of relations (valid on the dynamical subspace, where the two-step walk operator $U^2$ acts non-trivially):
\begin{align}
    \mu_{j, \pm} &= \exp(\pm i \arccos(\lambda_j)) \\
    \ket{\mu_{i, \pm}} &= \ket{\widetilde{\lambda}_i} - \mu_{i, \pm} S \ket{\widetilde{\lambda}_i}
\end{align}

where $\ket{\widetilde{\lambda}_i} = A \ket{\lambda_i}$, with $A = \sum_{j=1}^N \ket{\psi_j} \bra{j}$ an operator from the space of vertices $\mathbb{C}^N$ to the space of edges $\mathbb{C}^N \otimes \mathbb{C}^N$. The state $\ket{\psi_j}$ was defined in \ref{sec:previousResults}.

First note that one can apply our algorithm directly using a unitary encoding of $U_{dyn}$ (the walk operator restricted to the dynamical subspace): since $U_{dyn}$ is unitary, its unitary encoding is itself. One then uses the singular value transformation to apply the polynomial $R_m=X^{2m}$ to the eigenspaces of $U_{dyn}$, which yields the final state:

\begin{equation}
    \ket{\psi_f} = \sum_{\mu_{i, \pm}} \mu_{i, \pm}^{2m} \ket{\mu_{i, \pm}} \braket{\mu_{i, \pm}|\psi_{0}}
\end{equation}

This exactly matches the expression of the instantaneous PageRank introduced above in \ref{eq:instantaneous_pr}. Indeed, recall that since $U_{dyn}$ is unitary, it is normal and its left and right singular vectors coincide, which allows us to derive the above equation.

Note that, in equation \ref{eq:instantaneous_pr}, $\bra{i}_2$ is applied only to the last $n$ qubits of the space of edges $\mathbb{C}^N \otimes \mathbb{C}^N$. Indeed, $U_{dyn}$ is a $2N \times 2N$ matrix (see \cite{paparo_martin-delgado_2012}). This feature is important in \cite{paparo_martin-delgado_2012}: this phenomenon may explain some observed quantum effects on the final ranking, due to the non-trivial superposition of eigenstates represented by the sum in equation \ref{eq:instantaneous_pr}.

Computing the discrete quantum PageRank as Paparo \textit{et al.} proposed in \cite{paparo_martin-delgado_2012} becomes possible on a quantum computer, provided that one can prepare the initial state $\ket{\psi_0} = \sum_i U_{dyn} \ket{i}\ket{j}$ efficiently. One would have to repeat the measurements for each instantaneous PageRank instance, applying successively the singular value transformation with the polynomials $R_m = X^{2m}$. However, let us point out that the eigenvalues of the dynamical subspace are of the form $e^{i\arccos(\lambda_i)}$, which becomes $e^{2mi\arccos(\lambda_i)}$ when exponentiated using $R_m$.

Let us push the analogy with our method a bit further to try and understand the origin of the quantum superposition that arises from the use of the walk operator $U_{dyn}$. Indeed, one has:

\begin{align}
    \ket{\psi_f} &= \sum_{i, \pm} \mu_{i, \pm}^{2m}(Id - S \mu_{i, \pm}) \ket{\widetilde{\lambda_i}} \braket{\mu_{i, \pm}| \psi(0)} \\
    &= \sum_{i, \pm} \mu_{i, \pm}^{2m} \ket{\widetilde{\lambda_i}} \braket{\widetilde{\lambda_i}| \psi(0)} - S \sum_{i, \pm} \mu_{i, \pm}^{2m+1} \ket{\widetilde{\lambda_i}} \braket{\mu_{i, \pm} |\psi(0)}
\end{align}

Let's choose the initial state $\ket{\psi(0)} = C \sum_{i} \ket{\widetilde{\lambda_i}}$, where $C$ is a normalization constant we will drop to simplify the notations.

\begin{align}
    \ket{\psi_f} &= \sum_{i, \pm}(\mu_{i,\pm}^{2m} - \lambda_i \mu_{i, \pm}^{2m+1}) \ket{\widetilde{\lambda_i}} - S \sum_{i, \pm} (\mu_{i, \pm}^{2m+1} - \lambda_i \mu_{i, \pm}^{2m+2}) \ket{\widetilde{\lambda_i}} \\
    &= 2\sum_i (T_{2m}(\lambda_i) - \lambda_i T_{2m+1}(\lambda_i)) \ket{\widetilde{\lambda_i}} - 2 S \sum_i (T_{2m+1} (\lambda_i) - \lambda_i T_{2m+2}(\lambda_i)) \ket{\widetilde{\lambda_i}} \\
    &\propto \sum_i ( \lambda_i T_{2m+1}(\lambda_i) - T_{2m}(\lambda_i)) \ket{\widetilde{\lambda_i}} - U \sum_i (\lambda_i T_{2m+2}(\lambda_i) - T_{2m+1} (\lambda_i)) \ket{\widetilde{\lambda_i}}  \label{eq:final_state_paparo}
\end{align}

In the last line, we have dropped the two normalisation constants and used the fact that $U\ket{\widetilde{\lambda_i}} = S\ket{\widetilde{\lambda_i}}$ (see \cite{paparo_martin-delgado_2012}). 

Here $T_k$ is the $k^{th}$ Chebyshev polynomial of the first kind. We use the following relation to derive the second equality:

\begin{equation}
    T_k(\lambda_i) = \frac{1}{2}\left( (\lambda_i + \sqrt{\lambda_i^2 -1})^k + (\lambda_i - \sqrt{\lambda_i^2 -1})^k\right) = \frac{1}{2} (\mu_{i, +}^k + \mu_{i, -}^k)
\end{equation}
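This relation is easy to verify numerically; the following sketch uses the real form $\mu_{i,\pm} = \lambda_i \pm i\sqrt{1-\lambda_i^2} = e^{\pm i \arccos(\lambda_i)}$, valid for $|\lambda_i| \leq 1$:

\begin{verbatim}
import numpy as np

# Check that (mu_+^k + mu_-^k) / 2 equals T_k(lambda) on [-1, 1], with
# mu_pm = exp(+/- i arccos(lambda)) and T_k(cos x) = cos(k x).
lam = np.linspace(-0.99, 0.99, 101)
mu_p = lam + 1j * np.sqrt(1.0 - lam**2)
mu_m = lam - 1j * np.sqrt(1.0 - lam**2)
for k in (3, 8, 21):
    assert np.allclose((mu_p**k + mu_m**k) / 2, np.cos(k * np.arccos(lam)))
\end{verbatim}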

Equation \ref{eq:final_state_paparo} includes a linear combination of states that can be built using our Spectral Graph Theory quantum algorithm: 

$\sum_i (\lambda_i T_{2m+1}(\lambda_i) - T_{2m}(\lambda_i) ) \ket{\widetilde{\lambda_i}}$ and $ U \sum_i (\lambda_i T_{2m+2}(\lambda_i) - T_{2m+1} (\lambda_i) ) \ket{\widetilde{\lambda_i}}$.

Recently, the authors of \cite{gilyen_su_low_wiebe_2019} have provided an efficient method to emulate the polynomial $T_{2m}$ in the singular value transformation, together with linear combination theorems for unitary block encodings. This method allows us to prepare the two former states efficiently in combination with our quantum algorithm. In this very special case, the amplitudes add up and create the quantum superposition effect exploited by Paparo \textit{et al.} to compute the instantaneous PageRanks: one can see that the frequencies of the first term are affected by the higher frequencies of the second term. The second term is related to the first one as it is a rotation (by the unitary $U$) of a higher-degree filter of the eigenspaces of the discriminant matrix. In other words, using the Spectral Graph Theory framework, the instantaneous PageRank is derived from the interference of two Kernel-generated graph signal states, one of which is rotated by the unitary $U$.

This discussion paves the way to a generalized quantum based PageRank model, which adapts the Fourier Transform based generalized PageRank approach to the quantum domain. 

\begin{definition}[Quantum Fourier Transform based PageRank]
Let $G=(V,E)$ be a directed graph, $(\lambda_1, \hdots, \lambda_n)$ a Fourier basis associated with $G$, and $\mathbb{H}=(h_1, \hdots, h_n)$ a graph Kernel with associated inverse graph Fourier Transform coefficients $(\widehat{h}_1, \hdots, \widehat{h}_n)$. Let $\ket{\widetilde{\lambda_i}}$ be the vector state defined as above.

The (Instantaneous) Quantum Fourier Transform PageRank may be defined, like in \cite{paparo_martin-delgado_2012}, by: \\

\begin{equation}
    \mathcal{I}_{\mathbb{H}, PR} = \| \ket{\psi_f} \| = \|\sum_i \widehat{h}_i \ket{\widetilde{\lambda_i}} - U \sum_i \widehat{h}_i \ket{\widetilde{\lambda_i}}\|
\end{equation}
    
Note that if one chooses a sequence of Kernels $(\mathbb{H}_k)$, one can define a time-averaged PageRank by:

\begin{equation}
    \forall T \in \mathbb{N}, \mathcal{P}_{T, \mathbb{H}, PR} = \frac{1}{T} \sum_{i=1}^{T} \mathcal{I}_{\mathbb{H}_i, PR}
\end{equation}

\end{definition}

The study of this generalized quantum PageRank formulation is left as an open research topic.

\section{Conclusion}
Our paper presents three contributions to the field of Quantum Information Processing:

\begin{itemize}

\item First, we introduce a quantum circuit based algorithm dealing with a class of problems similar to that of the adiabatic quantum algorithm introduced in \cite{garnerone_zanardi_lidar_2012}. Our circuit prepares the state $\ket{\pi}$, $\pi$ being the classical Google importance vector, by simulating a Hamiltonian for time $\mathcal{O}(\frac{\log(N)}{\epsilon})$. We have then detailed several use cases for this algorithm.
    
\item The main contribution of the paper is the introduction of one of the first quantum algorithms performing a wide range of graph signal filters and Kernel-based signal production systems with a low gate complexity. This algorithm relies heavily on the Quantum Singular Value Transformation, a recent development in Quantum Signal Processing.
    
\item Our last contribution consists in applying the Spectral Graph Theory framework to the PageRank algorithm in order to generalize the scope of
  the original algorithm. To this effect, we introduce a generalized PageRank ranking method based on the Directed Graph Fourier Transform, which allows one to produce more pertinent rankings by filtering the spectrum of the Google graph. We then introduce an extension of the Google Matrix that can be used jointly with the generalized PageRank algorithm to fully exploit its versatility. Finally, we provide
  an application example of our Spectral Graph quantum algorithm to the generalized PageRank model.
    
\end{itemize}

The work presented here, describing the links between Spectral Graph Theory, PageRank, and Quantum Information Processing, paves the way to a new and exciting extension of Quantum Signal Processing: the application of quantum algorithms to Spectral Graph Theory. To this end,
one of the most interesting questions raised here is the further generalization of quantum PageRank algorithms using our generalized PageRank approach.


\subsection*{Contributions}
T. Chapuis-Chkaiban, as the main author, produced most of the results presented in this paper and wrote the first version of the manuscript. Z. Toffano and B. Valiron, as co-authors, made several research suggestions; both contributed equally to the scientific content. Besides, Z. Toffano and B. Valiron corrected and contributed to the writing and clarification of the manuscript and made several structural improvements to the final submitted version of this paper.

\subsection*{Competing Interests}
The authors have no competing interests to declare that are relevant to the content of this article.

\subsection*{Data availability}
The datasets generated and/or analysed during the current study are available in the repository: [PERSISTENT WEB LINK TO DATASETS]

\newpage
\begin{appendices}

\section{Generalized PageRank using various Kernels.}

\begin{sidewaystable}
\sidewaystablefn%
\begin{center}
\begin{minipage}{\textheight}

\begin{figure}[H]
\centering
\csvreader[separator = semicolon, head to column names, 
tabular=|c|c|c|c|c|c|c|c|c|, before reading = \sisetup{
round-mode = places,
round-precision = 3
},
table head=\hline \multicolumn{1}{|p{1.5cm}|}{\centering Nodes \newline Ranking} & \multicolumn{1}{p{1.5cm}|}{\centering Node \newline Classical \newline Kernel} & \multicolumn{1}{p{1.5cm}|}{\centering Score \newline Classical \newline Kernel} & \multicolumn{1}{p{1.5cm}|}{\centering Node \newline Heat \newline Kernel \newline Time = 5} & \multicolumn{1}{p{1.5cm}|}{\centering Score  \newline Heat Kernel \newline Time = 5} & \multicolumn{1}{p{1.5cm}|}{\centering Node \newline Heat \newline Kernel \newline Time = 7} & \multicolumn{1}{p{1.5cm}|}{\centering Score  \newline Heat Kernel \newline Time = 7} & \multicolumn{1}{p{1.5cm}|}{\centering Node \newline Heat Kernel \newline Time = 9} & \multicolumn{1}{p{1.5cm}|}{\centering Score  \newline Heat Kernel \newline Time = 9},
late after head=\\\hline\hline,
late after line=\csvifoddrow{\\\hline}{\\\hline}]
{results_csv/rankings_simple1_hk579.csv} {Ranking=\rank, Node Number Classical PageRank= \nnClass, Score Classical PageRank = \scoreClass, {Node Number Heat Kernel t = 5} =\nnHkf, {Score Heat Kernel t = 5} =\scoreHkf, {Node Number Heat Kernel t = 7} =\nnHks, {Score Heat Kernel t = 7} =\scoreHks, {Node Number Heat Kernel t = 9} =\nnHkn, {Score Heat Kernel t = 9} =\scoreHkn} {\rank & \nnClass & \tablenum{\scoreClass} & \nnHkf & \tablenum{\scoreHkf} & \nnHks & \tablenum{\scoreHks} & \nnHkn & \tablenum{\scoreHkn}}

    \caption{Ranking comparison between the classical PageRank and the generalized PageRank using Heat Kernels of parameters 5, 7 and 9}
    \label{fig:noderanking1}
\end{figure}


\end{minipage}
\end{center}
\end{sidewaystable}

\begin{figure}[H]
    \centering
    \centerline{
    \includegraphics[width= 1.15\textwidth]{results_figures/complex_cosine2_5.png}
    }
    \caption{Generalized PageRank Cosine Kernel comparison with the classical PageRank. Parameter: exponent = 5}
    \label{fig:cosK5}
\end{figure}

\begin{figure}[H]
    \centering
    \centerline{
    \includegraphics[width= 1.15\textwidth]{results_figures/complex_inverse_1_5.png}
    }
    \caption{Generalized PageRank Inverse Kernel comparison with the classical PageRank. Parameter: exponent = 5}
    \label{fig:invK5}
\end{figure}


\begin{figure}[H]
    \centering
    \centerline{
    \includegraphics[width= 1.15\textwidth]{results_figures/complex_monomial2_5.png}
    }
    \caption{Generalized PageRank Monomial Kernel comparison with the classical PageRank. Parameter: exponent = 5}
    \label{fig:monoK5}
\end{figure}

\begin{figure}[H]
\centering
\csvreader[separator = semicolon, head to column names, 
longtable=|c|c|c|c|c|, before reading = \sisetup{
round-mode = places,
round-precision = 3
},
table head=\hline \multicolumn{1}{|p{2cm}|}{\centering Ranking} & \multicolumn{1}{p{2cm}|}{\centering Node \newline Classical \newline Kernel} & \multicolumn{1}{p{2cm}|}{\centering Score \newline Classical \newline Kernel} & \multicolumn{1}{p{2.5cm}|}{\centering Node \newline Heat Kernel \newline Time = 5} & \multicolumn{1}{p{2cm}|}{\centering Score  \newline Heat Kernel \newline Time = 5},
late after head=\\\hline\hline,
late after line=\csvifoddrow{\\\hline}{\\\hline}]
{results_csv/ranking_complex_hk5_short.csv} {Ranking=\rank, Node Number Classical PageRank= \nnClass, Score Classical PageRank = \scoreClass, {Node Number Heat Kernel t = 5} =\nnHk, {Score Heat Kernel t = 5} =\scoreHk} {\rank & \nnClass & \tablenum{\scoreClass} & \nnHk & \tablenum{\scoreHk}}

    \caption{Ranking comparison between the classical PageRank and the generalized PageRank using the Heat Kernel of parameter t=5}
    \label{fig:ranking1}
\end{figure}




\end{appendices}

%%===========================================================================================%%
%% If you are submitting to one of the Nature Portfolio journals, using the eJP submission   %%
%% system, please include the references within the manuscript file itself. You may do this  %%
%% by copying the reference list from your .bbl file, paste it into the main manuscript .tex %%
%% file, and delete the associated \verb+\bibliography+ commands.                            %%
%%===========================================================================================%%

\bibliography{references}% common bib file
%% if required, the content of .bbl file can be included here once bbl is generated
%%\input sn-article.bbl

%% Default %%
%%\input sn-sample-bib.tex%

\end{document}

%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% End: