c# - I need advice developing a sensitive data transfer/storage/encryption system




intro

I'm working on a project that involves daily extraction of information (pharmacy records) from a Visual FoxPro database, and uploading it to a WordPress site, where clients of the pharmacy can securely view it. I'm after advice in terms of the general methodology of the software: I'm able to code it, but I need to know if I'm going about it the right way. I'm writing both the PC software (in C#/.NET 4.5) and the PHP WordPress plugin.

question 1: encryption

The current process for encrypting the data server-side that I plan to use is based on this article. Summarised, it advocates encrypting each separate user's data asymmetrically with their own public key, which is stored on the server. The private key that decrypts the data is encrypted symmetrically using the user's password, and also stored. That way, if the database is stolen, each user's password hash needs to be broken, and the process needs to be repeated for every user's data.
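As a rough illustration of that scheme (Python used only for the sketch; the real system would be C#/PHP), PBKDF2 derives a key-encryption key from the user's password, which then wraps the stored private key. The XOR keystream here is a stand-in cipher for brevity; a production system should use an authenticated cipher such as AES-GCM.

```python
# Sketch of the article's scheme: password-derived key wraps the private key.
import hashlib
import hmac
import os

def keystream(key: bytes, length: int) -> bytes:
    """Derive `length` pseudo-random bytes from `key` (HMAC-SHA256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def wrap_private_key(private_key: bytes, password: str, salt: bytes) -> bytes:
    """Encrypt the user's private key with a key derived from their password."""
    kek = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(private_key, keystream(kek, len(private_key))))

# Wrapping and unwrapping are the same XOR operation.
unwrap_private_key = wrap_private_key

salt = os.urandom(16)
private_key = os.urandom(32)          # stand-in for a real asymmetric private key
stored = wrap_private_key(private_key, "correct horse", salt)

assert unwrap_private_key(stored, "correct horse", salt) == private_key
assert unwrap_private_key(stored, "wrong password", salt) != private_key
```

Only `stored` and `salt` ever touch the database; the plaintext private key exists only while the correct password is supplied.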

The weakness, pointed out by the author himself, and the main point of this question, is the fact that while the user is logged in, the decrypted key is stored in session storage. The way the article suggests dealing with this is to limit the time the user stays logged in. I thought a better solution would be to store the key in a short-lived secure cookie (of course the whole process happens over HTTPS). That way, if an attacker has control of the user's computer and can read their cookies, they can just keylog the password and log in, with no need to steal the database, while if an attacker gains access to the server, they cannot decrypt the HTTPS traffic (or can they? I'm not sure).

Should I use secure cookies or session storage to temporarily store the decrypted key?
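For reference, the cookie attributes in play look like this; Python's http.cookies is used purely for illustration (the real plugin would set these via PHP's setcookie), and the cookie name and value are invented:

```python
# Build a short-lived secure cookie header for holding the decrypted key.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["decryption_key"] = "base64-encoded-key-material"
cookie["decryption_key"]["secure"] = True      # only ever sent over HTTPS
cookie["decryption_key"]["httponly"] = True    # not readable from page JavaScript
cookie["decryption_key"]["max-age"] = 300      # short-lived: expires in 5 minutes
cookie["decryption_key"]["path"] = "/"

header = cookie.output()
print(header)
```

Secure plus HttpOnly means the key is only exposed over TLS and is out of reach of injected scripts; Max-Age bounds the window an attacker on the client has to steal it.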

question 2: storage

The second thing I still want to work out is how to store the data, which is more of an efficiency problem. Since every user has their own key for encryption, it follows that the records for every user must be stored separately. I don't know whether I should store one "block" of data for every user, containing an encrypted JSON array of objects representing their records, or whether I should store the records in a table with the actual data structure, and encrypt each data field separately with the key.

I am leaning towards storing the data as one block: it seems to me more efficient to decrypt one big block of data at a time than perhaps several thousand separate fields. Also, even if I stored the data in a proper structure, I still wouldn't be able to use MySQL's WHERE, ORDER BY etc., since the data would be blobs.

Should I store the data as one big block per user, or separated into different fields?
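To make the one-blob layout concrete, a minimal Python sketch (record fields are invented): each user's records serialise to a single JSON array, which is compressed and would then be encrypted as one unit:

```python
# One blob per user: serialise all records, compress, then encrypt (not shown).
import gzip
import json

records = [
    {"date": "2013-05-01", "drug": "amoxicillin", "qty": 21},
    {"date": "2013-05-03", "drug": "ibuprofen", "qty": 84},
]

blob = gzip.compress(json.dumps(records).encode())   # one ciphertext input per user
restored = json.loads(gzip.decompress(blob).decode())
assert restored == records
```

Note that compression must happen before encryption, since ciphertext is effectively incompressible.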

question 3: transfer

I extract the data from a DBF file, and create a "diff", whereby I compare the current extracted data to the last day's data, and upload only the blocks of the users that have changed (I can't upload individual records, since I end up storing users' data in blocks). I also include "delete" instructions for users that have been deleted. There are hundreds of thousands of records in the database, totalling over 200 MB, and the size increases every day.
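The diff step might be sketched like this (Python for illustration; the real extractor is C#, and the function names are invented): hash each user's block and compare against yesterday's hashes, emitting changed blocks plus delete instructions:

```python
# Daily diff: upload only changed per-user blocks, plus deletions.
import hashlib
import json

def block_hash(records) -> str:
    return hashlib.sha256(json.dumps(records, sort_keys=True).encode()).hexdigest()

def make_diff(today: dict, yesterday_hashes: dict) -> dict:
    changed = {user: records for user, records in today.items()
               if block_hash(records) != yesterday_hashes.get(user)}
    deleted = [user for user in yesterday_hashes if user not in today]
    return {"changed": changed, "deleted": deleted}

yesterday = {"alice": [{"drug": "a"}], "bob": [{"drug": "b"}]}
today = {"alice": [{"drug": "a"}], "carol": [{"drug": "c"}]}

diff = make_diff(today, {u: block_hash(r) for u, r in yesterday.items()})
assert diff["changed"] == {"carol": [{"drug": "c"}]}
assert diff["deleted"] == ["bob"]
```

Keeping only yesterday's hashes (rather than the full previous extract) also keeps the local state small relative to the 200 MB database.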

My current plan is to write the data to a JSON file, gzip it and upload it to the server. The question is, how do I do that while ensuring the security of the data? Naturally, the upload will happen over HTTPS, and I will have an API password in place to allow only authorised uploads, but my main concern is how to protect the data if the server is compromised. I don't want an attacker to be able to grab the JSON file from the server while it's being processed. One thought I had was to have the server send me the list of public keys for the users, and to perform the encryption in my software, before upload. It seems to me that's the only way of protecting the data. I could encrypt the whole JSON file, perhaps with the API key or a special password, but that's moot if an attacker can access the decrypted file while it's being processed on the server. Is there a better solution?

Should I encrypt the data individually client-side, or is there a way to securely transfer it to the server and encrypt it there?

Thanks in advance for any answers, I'd love to hear from anyone who's dealt with these problems before.

Note: cross-posted to Programmers, see the comments.

question 1: encryption

As it happens, I am working on a similar scheme to encrypt personal details (email, IP) in WordPress comments, so that if the server is compromised, the sensitive data in the database is still encrypted. Storing the asymmetric decryption key in the session was out for me, since that would leave the key on the server for an attacker to grab at the same time as compromising it.

So, cookies over an SSL connection are the better way to go: at least the attacker has to wait for a user to log in before they can steal their key(s). In tandem with this, some sort of tripwire system is a good idea, so that users cannot log onto the system (and thus provide their keys to the waiting attacker) once it has been compromised.

As you say, encrypting records (either with one key as per my design, or many keys as per yours) means that searching through records becomes a process you have to move away from the database server, which in turn means it is slower.

You may be able to mitigate against that by making a trade-off between speed and security: some fields can be fuzzed and stored unencrypted. For example, if you want to search for where patients are located, take the (lat, long) of their address, apply a random shift (say up to 3 miles on both axes in either direction) and store the resulting coordinates in plain text. Approximate count queries relating to location can then be done without decryption.
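A minimal Python sketch of that fuzzing idea (the 0.043-degree figure is an approximation: 3 miles is roughly 0.043 degrees of latitude, and longitude degrees shrink with latitude, which is ignored here):

```python
# Fuzz coordinates before storing them in plain text for approximate queries.
import random

SHIFT_DEG = 0.043   # ~3 miles of latitude; good enough for approximate counts

def fuzz(lat: float, lon: float) -> tuple:
    """Shift each axis by a uniform random offset of up to SHIFT_DEG degrees."""
    return (lat + random.uniform(-SHIFT_DEG, SHIFT_DEG),
            lon + random.uniform(-SHIFT_DEG, SHIFT_DEG))

flat, flon = fuzz(53.4808, -2.2426)   # example: central Manchester
assert abs(flat - 53.4808) <= SHIFT_DEG
assert abs(flon - (-2.2426)) <= SHIFT_DEG
```

The stored values support WHERE-style range queries on location while revealing only a neighbourhood, not an address.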

mitigating against attacks on the client computer

The above looks at how to mitigate against attacks on the server, which is the greatest risk, since you have all the records stored there. As you rightly point out though, attacks on client machines are also a concern, and if your clients are members of the public, their security processes can be assumed to be non-existent.

On that basis I would strengthen the single password (which is given in its entirety) with a passphrase from which the client needs to select three random letters (i.e. it is not given in its entirety). This defends elegantly against keyloggers in two ways: firstly, if drop-down menus are used, it is harder to eavesdrop, and even if the user uses keyboard shortcuts, they still have not supplied the whole phrase. At each successful logon, the indexes of the random letters (e.g. 1, 4 and 5) are recorded and not asked for again for a long period. Obviously, too many wrong answers causes the account to be locked out and to require reauthorisation via a phone call or a snail-mail reset code.
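The challenge/verify logic for the partial passphrase might look like this (a Python sketch with invented function names; a real implementation would also persist which positions have recently been asked for):

```python
# Partial-passphrase check: ask for three random letter positions only.
import hmac
import secrets

def make_challenge(phrase_length: int) -> list:
    """Pick three distinct 1-based positions to ask the user for."""
    positions = list(range(1, phrase_length + 1))
    secrets.SystemRandom().shuffle(positions)
    return sorted(positions[:3])

def verify(phrase: str, challenge: list, answers: str) -> bool:
    """Compare the supplied letters against the stored phrase, constant-time."""
    expected = "".join(phrase[p - 1] for p in challenge)
    return hmac.compare_digest(expected.lower(), answers.lower())

phrase = "redsquirrel"
challenge = make_challenge(len(phrase))
correct = "".join(phrase[p - 1] for p in challenge)
assert verify(phrase, challenge, correct)
assert not verify(phrase, challenge, "zzz")
```

In production the phrase itself should of course be stored hashed or encrypted per position, not in plain text as here.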

Other authentication methods you could use: text the user an additional passphrase every time they enter the correct password, or (probably prohibitively expensive) use an authentication device as per online banking.

store little/no identifying information

Another tip for security is to store as little personal information as possible. If you can do without the ability to reset passwords via email, then name, address, telephone numbers and email (all identifying information) are perhaps unnecessary. That personal information can be stored separately in a disconnected database on another server, using a common primary key to link them together. (In fact, if a user wishes to reset their password, you could store a flag against the anonymous user record, and the pharmacist can run the reset process manually on a firewalled machine when they next visit the admin panel.)
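A sketch of that split, with only an opaque user id shared between the two stores (two in-memory SQLite databases stand in for two separate servers; the table and column names are invented):

```python
# Identifying details live in a disconnected store, linked only by user_id.
import sqlite3

records_db = sqlite3.connect(":memory:")    # the web-facing, encrypted store
identity_db = sqlite3.connect(":memory:")   # the disconnected, firewalled store

records_db.execute(
    "CREATE TABLE records (user_id TEXT, blob BLOB, reset_requested INTEGER DEFAULT 0)")
identity_db.execute(
    "CREATE TABLE identities (user_id TEXT, name TEXT, email TEXT)")

records_db.execute("INSERT INTO records (user_id, blob) VALUES ('u123', X'00')")
identity_db.execute("INSERT INTO identities VALUES ('u123', 'A. Patient', 'a@example.com')")

# A password reset only sets a flag; the pharmacist resolves it offline.
records_db.execute("UPDATE records SET reset_requested = 1 WHERE user_id = 'u123'")
pending = [r[0] for r in
           records_db.execute("SELECT user_id FROM records WHERE reset_requested = 1")]
assert pending == ["u123"]
```

An attacker who steals the web-facing database then gets encrypted blobs keyed by anonymous ids, with nothing to tie them to a person.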

question 2

Should you encrypt tabular data in one blob or leave it in separate columns? I've looked at this one in my application. For me, it is stored in one blob, since my use-case is not search-intensive, and having n decrypts per row rather than one made the decision easy. That said, you may prefer the tidiness of encrypting columns individually, and one could argue that if corruption creeps in, separating them out gives a better chance of the rest of the row surviving.

If you decide to store it in a single blob, I am using a format similar to this (rows separated by newlines prior to being asymmetrically encrypted):

    1.2          <-- version of the format, so you can add things in future
    key1=value1
    key2=value2
    ...
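Parsing and building that versioned key=value format is a few lines; here is a Python sketch (function names invented), applied after decrypting, or before encrypting, the blob:

```python
# Round-trip the versioned "key=value" blob format described above.
def parse_blob(text: str) -> tuple:
    """Return (version, fields) from the newline-separated format."""
    lines = text.strip().split("\n")
    version = lines[0]
    fields = dict(line.split("=", 1) for line in lines[1:])
    return version, fields

def build_blob(version: str, fields: dict) -> str:
    return "\n".join([version] + [f"{k}={v}" for k, v in fields.items()])

blob = build_blob("1.2", {"name": "amoxicillin", "qty": "21"})
assert parse_blob(blob) == ("1.2", {"name": "amoxicillin", "qty": "21"})
```

Splitting on the first "=" only means values may themselves contain "=" without breaking the format.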

If you have several processes writing columns, make sure you lock rows between read and write, otherwise (as hinted at above) you can lose some of your data.

That said, store it as JSON if that format works better for you.

question 3

My understanding of the question is: how do you replicate from an unencrypted offline copy, given that you cannot decrypt user records yourself? I wonder here whether you could relax your security constraints a bit: store a common public key on the server, and keep a separate record of changes encrypted with that common key. The table this populates should be periodically emptied (by running a sync routine on a remote secure machine); thus, the value of the changes table to an attacker is small compared to obtaining the whole database unencrypted.
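The sync routine then just drains that table; a Python sketch (decrypt_with_private_key is hypothetical, standing in for the real asymmetric decryption, and the schema is invented):

```python
# The secure offline machine drains the changes table so an attacker never
# finds more than a short window of data on the server.
import sqlite3

def decrypt_with_private_key(blob: bytes) -> bytes:
    return blob  # placeholder: real code would use the pharmacist's private key

def drain_changes(db: sqlite3.Connection) -> list:
    rows = [decrypt_with_private_key(r[0]) for r in db.execute("SELECT blob FROM changes")]
    db.execute("DELETE FROM changes")   # empty the table once synced
    db.commit()
    return rows

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE changes (blob BLOB)")
db.execute("INSERT INTO changes VALUES (X'AB')")
assert drain_changes(db) == [b"\xab"]
assert list(db.execute("SELECT * FROM changes")) == []
```

The more often this runs, the smaller the window of changes an attacker could capture.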

The corresponding private key, of course, should be on the pharmacist's computer, again securely firewalled from the internet.

The risk with this design is that an attacker replaces the server's public key with one of his/her own, so they can later collect data that has been encrypted for them! However, as long as you've installed a trip-wire on the server, this can reasonably be defended against: if it is triggered, the dynamic part of the web application won't write any new changes (in fact it won't work at all) until the system has been scanned and determined safe.

c# php mysql encryption security
