Date of Award


Embargo Period


Degree Type


Degree Name

Doctor of Philosophy (PhD)


Department

Electrical and Computer Engineering


First Advisor

Virgil Gligor

Second Advisor

Adrian Perrig


Abstract

As society migrates from the physical to the online world, authenticating online entities is becoming a challenge, since people lack the real-world cues upon which to base their authentication decisions. Unfortunately, current systems provide little support for online entity authentication. For example, how can a user be certain that a Facebook invitation is truly from the claimed individual, when anyone can trivially set up a bogus online identity with someone else’s photo? How can a user ensure that a piece of downloaded software is what they searched for, when even security-conscious users are often frustrated by their inability to judge the legitimacy of software? Given an SSL certificate warning, how can a user validate the certificate before proceeding, when it could be legitimate (e.g., signed by a legitimate authority that the browser does not recognize) or malicious (e.g., signed by a compromised CA)?

In this dissertation, we show that users can make authentication decisions regarding previously unknown entities after evaluating credible evidence of the entities’ proximity to trustworthy recommenders. Our authentication setting is broad: (1) entities can range from unknown parties issuing social invitations to unknown issuers of identity certificates and downloadable software programs, and (2) recommenders can be individuals and organizations that users can trust in everyday life, such as close friends, experts, crowds, and public institutions. We base our dissertation on two key observations. First, recommenders are accountable to, and have a built-in bond of trust with, users (i.e., social collateral), which they stand to lose if they act dishonestly. Social science research has shown that such loss deters misbehavior, and hence recommenders can be trusted to act honestly. Second, the strength of evidence offered to demonstrate an unknown entity’s proximity to a trusted recommender can establish entity accountability: the previously unknown entity becomes known, and thus its dishonest behavior can be punished. Underlying this observation is the requirement that users can understand and evaluate proximity-based evidence, such as (1) social ties (e.g., frequency, recency, and reciprocity of communications), (2) knowledge relations (e.g., reviews and ratings), (3) physical relations (e.g., physical locations and encounters), and (4) policy relations (e.g., the policy of the organization/authority that users select to do business with).

Armed with these concepts, this dissertation explores different types of proximity evidence (social, knowledge, physical, and policy) that can empower online users to hold unknown entities accountable for their actions, and shows how to present and evaluate such evidence in a user-friendly manner. We introduce user-centric authentication systems for online identities, software, certificates, and public-key infrastructures. We confirm that providing robust, usable, and transparent proximity-based evidence empowers users to make context-dependent authentication decisions and to build trust in previously unknown online entities.