Summary: ``The determinant maximization problem gives a general framework that models problems arising in fields as diverse as statistics [1], convex geometry [2], fair allocations [3], combinatorics [4], spectral graph theory [5], network design, and random processes [6]. In an instance of the determinant maximization problem, we are given a collection of vectors $U=\{v_1,\dots,v_n\}\subset\Bbb R^d$, and the goal is to pick a subset $S\subseteq U$ of the given vectors to maximize the determinant of the matrix $\sum_{i\in S}v_iv_i^{\ssf T}$. Often, the set $S$ of picked vectors must satisfy additional combinatorial constraints, such as a cardinality constraint ($|S|\leq k$) or a matroid constraint ($S$ is a basis of a matroid defined on the vectors). \par ``In this paper, we give a polynomial-time deterministic algorithm that returns an $r^{O(r)}$-approximation for any matroid of rank $r \leq d$. This improves previous results that give $e^{O(r^2)}$-approximation algorithms relying on $e^{O(r^2)}$-approximate {\it estimation} algorithms [4], [7]--[9] for any $r\leq d$. All previous results use convex relaxations and their relationship to stable polynomials and strongly log-concave polynomials, or non-convex relaxations for the problem [10]. In contrast, our algorithm builds on combinatorial algorithms for matroid intersection, which iteratively improve any solution by finding an {\it alternating negative cycle} in the {\it exchange graph} defined by the matroids. While the $\det(\cdot)$ function is not linear, we show that taking appropriate linear approximations at each iteration suffices to give the improved approximation algorithm.''
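The local-search idea described above can be illustrated with a toy sketch. This is not the paper's algorithm: it uses only a cardinality constraint (the simplest matroid) and single-element swaps rather than alternating negative cycles in the exchange graph, but it shows the key linearization step, namely replacing $\log\det(A+B)\approx\log\det(A)+\operatorname{tr}(A^{-1}B)$ around the current solution $A=\sum_{i\in S}v_iv_i^{\ssf T}$, so the estimated gain of swapping $u$ out and $v$ in is $v^{\ssf T}A^{-1}v-u^{\ssf T}A^{-1}u$. All function names are illustrative.

```python
import numpy as np

def objective(V, S):
    """det(sum_{i in S} v_i v_i^T) -- the determinant-maximization objective."""
    A = sum(np.outer(V[i], V[i]) for i in S)
    return np.linalg.det(A)

def local_search(V, k, iters=100):
    """Toy single-swap local search under a cardinality constraint |S| = k.

    The nonlinear objective log det(A) is replaced at each step by its
    first-order (linear) approximation around the current solution:
        log det(A + B) ~ log det(A) + tr(A^{-1} B),
    so the estimated gain of swapping u out and v in is
        tr(A^{-1} (v v^T - u u^T)) = v^T A^{-1} v - u^T A^{-1} u.
    """
    n, d = V.shape
    S = set(range(k))  # arbitrary feasible start (assumed nondegenerate)
    for _ in range(iters):
        A = sum(np.outer(V[i], V[i]) for i in S)
        Ainv = np.linalg.inv(A)
        best, best_gain = None, 1e-9
        for u in S:
            for v in set(range(n)) - S:
                gain = V[v] @ Ainv @ V[v] - V[u] @ Ainv @ V[u]
                if gain > best_gain:
                    best, best_gain = (u, v), gain
        if best is None:
            break  # no swap with positive linearized gain: local optimum
        u, v = best
        T = (S - {u}) | {v}
        # accept the swap only if the true determinant actually improves
        if objective(V, T) > objective(V, S):
            S = T
        else:
            break
    return S
```

For instance, with $V=\{(1,0),(0,1),(2,0),(0,2)\}$ and $k=2$, the search swaps its way from $\{(1,0),(0,1)\}$ to $\{(2,0),(0,2)\}$, increasing the determinant from $1$ to $16$. The paper's contribution is precisely that a suitable version of this linearize-and-improve step, run inside matroid-intersection machinery with alternating negative cycles, yields the $r^{O(r)}$ guarantee.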