Search for: database
Total 198 records

    Query Correctness Assurance for Outsourced Databases

    , M.Sc. Thesis Sharif University of Technology Noferesti, Morteza (Author) ; Jalili, Rasoul (Supervisor)
    Abstract
    In the secure data outsourcing scenario, verifying the reply of an untrusted server involves assessing its authenticity, completeness, and integrity. In this thesis, an efficient method, with emphasis on freshness, is introduced to evaluate the correctness of replies from the server. It takes into account differing application needs, inherent differences in the data, and different update mechanisms. The method evaluates freshness using timestamps stored alongside the outsourced data. Because not only the freshness of the response but also the correctness of the timestamps themselves must be verified, two general methods for evaluating and verifying the responses were... 

    A Trust-based Approach for Correctness Verification of Query Results in Data Outsourcing Scenario

    , M.Sc. Thesis Sharif University of Technology Ghasemi, Simin (Author) ; Jalili, Rasool (Supervisor)
    Abstract
    One of the security issues in the database outsourcing scenario is the correctness of query results. Correctness verification covers the integrity, completeness, and freshness of the results. Most of the proposed approaches for correctness verification impose a high overhead on the components of the scenario, which prevents the scenario from being implemented in practical applications. In this thesis, we propose a probabilistic approach that imposes an acceptable overhead for verifying the correctness of the results returned by the service provider. The approach uses the service provider's past behavior to calculate a trust value toward it, which is then used to adjust the imposed overhead. In other words,... 
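    The trust-adjusted-overhead idea can be sketched in a few lines. This is purely illustrative: the exponential-moving-average update rule, the `alpha` parameter, and the linear audit-probability mapping are our assumptions, not the thesis's actual mechanism.

    ```python
    def update_trust(trust, verified_ok, alpha=0.1):
        """Exponential moving average of verification outcomes, in [0, 1]."""
        return (1 - alpha) * trust + alpha * (1.0 if verified_ok else 0.0)

    def audit_probability(trust, p_min=0.05, p_max=1.0):
        """Verify everything from an untrusted server, little from a trusted one."""
        return p_max - (p_max - p_min) * trust

    trust = 0.0
    for _ in range(50):          # 50 consecutive correctly verified responses
        trust = update_trust(trust, True)
    print(round(audit_probability(trust), 3))  # overhead shrinks toward p_min
    ```

    The point of such a scheme is that the expensive correctness checks are run on only a fraction of results, and that fraction adapts to the server's track record.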

    Improvement of Software Quality Attributes based on the Properties of Relational and Non-relational data Models

    , M.Sc. Thesis Sharif University of Technology Moghimi, Hamed (Author) ; Habibi, Jafar (Supervisor)
    Abstract
    With the growing use of computer systems, especially in the form of web applications, software development and customer requirements have become more complex. Modern computer systems are expected to meet non-functional requirements in addition to functional ones. Non-functional requirements are also called software quality attributes. One of the most important quality attributes for today's software systems is performance. Because of the growing number of users, today's software needs more efficiency than ever to meet performance demands. Storage components and databases are the most important sources of performance bottlenecks in web applications. Most of... 

    Persian Compound Verb Database with the Verbal Element: “Shodan”

    , M.Sc. Thesis Sharif University of Technology Hashemnejad, Zeinab (Author) ; Khosravizadeh, Parvaneh (Supervisor) ; Shojaie, Razieh (Supervisor)
    Abstract
    Compound verbs (CVs) and their components have been widely discussed in previous linguistic research as one of the most important and fundamental constructions of the Persian language. Nevertheless, most of the arguments and assumptions in that research align with each other, and studying CVs from a different viewpoint, in order to resolve the open problems in this area, has received little attention. In this thesis we endeavor to review and criticize the existing definitions of CVs, reconsider this construction and the syntactic and semantic roles of the verbal and non-verbal elements in the combination from another point of view, and revise the previous... 

    An Approach for Secure Data Outsourcing

    , Ph.D. Dissertation Sharif University of Technology Hadavi, Mohammad Ali (Author) ; Jalili, Rasool (Supervisor)
    Abstract
    Data outsourcing is an approach to delegate the burden of data management to external servers. In spite of its clear advantages, data outsourcing requires security assurances including data confidentiality, query result correctness, and access control enforcement. Research proposals have identified solutions with disparate assumptions for different security requirements. It is a real obstacle towards having an integrated solution through the combination of existing approaches. The practicality of data outsourcing to the cloud is seriously affected by this challenge. In this thesis, a unified view based on secret sharing is proposed to simultaneously achieve confidentiality, correctness, and... 
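    For orientation, the primitive the abstract names is (k, n) threshold secret sharing. Below is a minimal sketch of classic Shamir secret sharing over a prime field; the field size, parameters, and code are toy illustrations, not the thesis's construction.

    ```python
    import random

    P = 2_147_483_647  # Mersenne prime used as the field modulus (toy choice)

    def share(secret, k, n, rng=random.Random(42)):
        """Split `secret` into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation of the sharing polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(123456, k=3, n=5)
    print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover the secret
    ```

    The appeal in the outsourcing setting is that fewer than k colluding servers learn nothing about the data, which is the confidentiality side of the unified view.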

    Secure untraceable off-line electronic cash system

    , Article Scientia Iranica ; Volume 20, Issue 3 , 2013 , Pages 637-646 ; 10263098 (ISSN) Baseri, Y ; Takhtaei, B ; Mohajeri, J ; Sharif University of Technology
    2013
    Abstract
    Eslami and Talebi (2011) [25] proposed an untraceable electronic cash scheme and claimed that it protects the anonymity of customers, detects the identity of double spenders, and provides date attachability of coins to manage the bank database. In this paper, reviewing Eslami and Talebi's scheme, as one of the latest untraceable electronic cash schemes, and showing its weaknesses (in fulfilling the properties of perceptibility of double spenders, unforgeability, and date attachability of coins) and its faults (related to the exchange protocol), we propose a new untraceable electronic cash scheme that is immune to the weaknesses of the former. Our scheme provides anonymity,... 

    ECG based human identification using wavelet distance measurement

    , Article Proceedings - 2011 4th International Conference on Biomedical Engineering and Informatics, BMEI 2011, 15 October 2011 through 17 October 2011 ; Volume 2 , October , 2011 , Pages 717-720 ; 9781424493524 (ISBN) Naraghi, M. E ; Shamsollahi, M. B ; Sharif University of Technology
    2011
    Abstract
    In this paper a new approach is proposed for electrocardiogram (ECG) based human identification using wavelet distance measurement. The main advantage of this method is that it guarantees high accuracy even in abnormal cases. Furthermore, it has low sensitivity to noise. The algorithm was applied to 11 normal subjects and 10 abnormal subjects from the MIT-BIH database using single-lead data, and a 100% identification rate was achieved on both normal and abnormal subjects. Adding artificial white noise to the signals shows that the method remains nearly as accurate at SNR levels above 5 dB for normal subjects and above 20 dB for abnormal subjects  

    Improve the classification and sales management of products using multi-relational data mining

    , Article 2011 IEEE 3rd International Conference on Communication Software and Networks, ICCSN 2011, Xi'an, 27 May 2011 through 29 May 2011 ; 2011 , Pages 329-337 ; 9781612844855 (ISBN) Houshmand, M ; Alishahi, M ; Sharif University of Technology
    2011
    Abstract
    There are elements, such as competition among companies and changes in demand, that result in changes in customers' behavior. Paying no attention to these changes may therefore lead to a reduction in company profits and a loss of customers. Since data and their analysis determine the activities and decisions of companies, data quality is of paramount importance, because misinformation leads to wrong decisions. Since data mining is designed to find frequently repeated patterns, it can be used to address product sales violations by salespeople and to increase data quality. Most available data mining models try to find patterns in a single table, but the... 

    Utilizing intelligent segmentation in isolated word recognition using a hybrid HTD-HMM

    , Article International Conference on Circuits, Systems, Signal and Telecommunications - Proceedings, 21 October 2010 through 23 October 2010 ; October , 2011 , Pages 42-49 ; 9789604742714 (ISBN) Kazemi, R ; Sereshkeh, A. R ; Ehsandoust, B ; Sharif University of Technology
    2011
    Abstract
    Isolated Word Recognition (IWR) is becoming increasingly attractive due to improvements in speech recognition techniques. However, the accuracy of IWR suffers when large databases or words with similar pronunciation are used. The criterion for accurate speech recognition is suitable segmentation. However, the traditional method of segmentation, equal segmentation, does not produce the most accurate results. Furthermore, manual segmentation based on events is not feasible for large databases. In this paper, we introduce an intelligent segmentation based on Hierarchical Temporal Decomposition (HTD). Based on this method, a temporal decomposition (TD) algorithm can be used to... 

    A novel method to find appropriate ε for DBSCAN

    , Article Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 24 March 2010 through 26 March 2010 ; Volume 5990 LNAI, Issue PART 1 , 2010 , Pages 93-102 ; 03029743 (ISSN) ; 3642121446 (ISBN) Esmaelnejad, J ; Habibi, J ; Hassas Yeganeh, S ; Sharif University of Technology
    2010
    Abstract
    Clustering is one of the most useful methods of data mining, in which a set of real or abstract objects is categorized into clusters. DBSCAN, one of the most famous density-based clustering methods, assigns points in dense areas to the same clusters. In DBSCAN, a point is said to be dense if the ε-radius circular area around it contains at least MinPts points. To find such dense areas, region queries are fired. Two points are density connected if the distance between them is less than ε and at least one of them is dense. Finally, the density-connected parts of the data set are extracted as clusters. The significant issue with such a method is that its parameters (ε... 
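    The density test the abstract describes can be written down directly. This is a generic sketch of DBSCAN's core-point check (names and data are ours), not the paper's ε-selection method.

    ```python
    from math import dist

    def is_dense(point, data, eps, min_pts):
        """DBSCAN core-point test: eps-neighborhood holds at least min_pts points."""
        neighbors = [q for q in data if dist(point, q) <= eps]
        return len(neighbors) >= min_pts

    # A tight cluster of three points plus one far-away outlier.
    data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
    print(is_dense(data[0], data, eps=0.5, min_pts=3))   # cluster point is dense
    print(is_dense(data[3], data, eps=0.5, min_pts=3))   # outlier is not
    ```

    The sensitivity of this test to ε is exactly why choosing ε well, the problem the paper tackles, matters: too small and nothing is dense, too large and everything merges.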

    Quantization-unaware double JPEG compression detection

    , Article Journal of Mathematical Imaging and Vision ; Volume 54, Issue 3 , 2016 , Pages 269-286 ; 09249907 (ISSN) Taimori, A ; Razzazi, F ; Behrad, A ; Ahmadi, A ; Babaie Zadeh, M ; Sharif University of Technology
    Springer New York LLC 
    Abstract
    Current double JPEG compression detection techniques identify whether or not a JPEG image file has undergone compression twice, given its embedded quantization table. This paper addresses another forensic scenario, in which the quantization table of a JPEG file is not explicitly or reliably known, which may compel the forensic analyst to reveal the recompression clues blindly. To do this, we first statistically analyze the theory behind quantized alternating current (AC) modes in JPEG compression and show that the number of quantized AC modes required to detect double compression is a function of both the image's block texture and the compression quality level in a fresh... 
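    To make the quantization step concrete, here is a toy sketch of JPEG-style forward and inverse quantization of AC coefficients; the coefficient values and steps are invented for illustration and are unrelated to the paper's detector.

    ```python
    def quantize(coeffs, step):
        """Forward quantization of DCT coefficients, as in JPEG encoding."""
        return [round(c / step) for c in coeffs]

    def dequantize(qcoeffs, step):
        """Inverse quantization performed when a JPEG file is decoded."""
        return [q * step for q in qcoeffs]

    # A toy row of AC coefficients compressed once with step 5, then recompressed
    # with step 3: the surviving values carry traces of both steps, which is the
    # kind of statistical footprint double-compression detectors exploit.
    ac = [13.0, -7.0, 4.0, 0.5]
    once = dequantize(quantize(ac, 5), 5)
    twice = dequantize(quantize(once, 3), 3)
    print(once, twice)
    ```

    When the table (the steps) is unknown, as in the paper's scenario, the detector must infer such footprints from coefficient statistics alone.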

    CytoGTA: a cytoscape plugin for identifying discriminative subnetwork markers using a game theoretic approach

    , Article PLoS ONE ; Volume 12, Issue 10 , 2017 ; 19326203 (ISSN) Farahmand, S ; Foroughmand Araabi, M. H ; Goliaei, S ; Razaghi Moghadam, Z ; Sharif University of Technology
    Abstract
    In recent years, analyzing genome-wide expression profiles to find genetic markers has received much attention as a challenging field of research aiming at unveiling biological mechanisms behind complex disorders. The identification of reliable and reproducible markers has lately been achieved by integrating genome-scale functional relationships and transcriptome datasets, and a number of algorithms have been developed to support this strategy. In this paper, we present a promising and easily applicable tool to accomplish this goal, namely CytoGTA, which is a Cytoscape plug-in that relies on an optimistic game theoretic approach (GTA) for identifying subnetwork markers. Given transcriptomic... 

    Characterizing the rate-memory tradeoff in cache networks within a factor of 2

    , Article IEEE Transactions on Information Theory ; 2018 ; 00189448 (ISSN) Yu, Q ; Maddah Ali, M. A ; Avestimehr, A. S ; Sharif University of Technology
    Institute of Electrical and Electronics Engineers Inc  2018
    Abstract
    We consider a basic caching system, where a single server with a database of N files (e.g., movies) is connected to a set of K users through a shared bottleneck link. Each user has a local cache memory with a size of M files. The system operates in two phases: a placement phase, where each cache memory is populated up to its size from the database, and a following delivery phase, where each user requests a file from the database and the server is responsible for delivering the requested contents. The objective is to design the two phases to minimize the load (peak or average) of the bottleneck link. We characterize the rate-memory tradeoff of the above caching system within a factor of... 
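    For orientation, the well-known achievable peak rate of the Maddah-Ali and Niesen coded caching scheme at the memory points M = tN/K is R = (K - t)/(t + 1); the paper's contribution is bounding the gap between such achievable schemes and the optimum, not this formula. A minimal sketch:

    ```python
    from fractions import Fraction

    def peak_rate(K, N, t):
        """Peak load of the shared link (in file sizes) at cache size M = t*N/K."""
        assert 0 <= t <= K
        return Fraction(K - t, t + 1)

    # K = 4 users, N = 4 files: the link load drops sharply as cache memory grows.
    for t in range(5):
        print(f"t = {t} (M = {t} files): R = {peak_rate(4, 4, t)}")
    ```

    Note the multiplicative gain over uncoded caching: at t = 1 the load is 3/2 rather than the uncoded 3, thanks to coded multicast opportunities in the delivery phase.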

    The exact rate-memory tradeoff for caching with uncoded prefetching

    , Article IEEE Transactions on Information Theory ; Volume 64, Issue 2 , 2018 , Pages 1281-1296 ; 00189448 (ISSN) Yu, Q ; Maddah Ali, M. A ; Avestimehr, A. S ; Sharif University of Technology
    Institute of Electrical and Electronics Engineers Inc  2018
    Abstract
    We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users' demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate... 

    Knowledge discovery using a new interpretable simulated annealing based fuzzy classification system

    , Article Proceedings - 2009 1st Asian Conference on Intelligent Information and Database Systems, ACIIDS 2009, 1 April 2009 through 3 April 2009, Dong Hoi ; 2009 , Pages 271-276 ; 9780769535807 (ISBN) Mohamadi, H ; Habibi, J ; Moaven, S ; Sharif University of Technology
    2009
    Abstract
    This paper presents a new interpretable fuzzy classification system. The simulated annealing heuristic is employed to effectively explore the large search space usually associated with classification problems. Two criteria are used to evaluate the proposed method: the first is the accuracy of the extracted fuzzy if-then rules, and the second is the comprehensibility of the obtained rules. Experiments are performed on data sets from the UCI machine learning repository. Results are compared with several well-known classification algorithms and show that the proposed approach provides a more accurate and interpretable classification system. © 2009 IEEE  
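    The core simulated annealing loop the abstract relies on can be sketched generically. This applies the heuristic to a toy one-dimensional objective rather than fuzzy rule extraction; the rule encoding and the accuracy/comprehensibility scoring of the paper are not reproduced here.

    ```python
    import math
    import random

    def anneal(score, neighbor, state, t0=1.0, cooling=0.95, steps=200, seed=0):
        """Maximize `score` by simulated annealing with geometric cooling."""
        rng = random.Random(seed)
        current, current_score = state, score(state)
        best, best_score = current, current_score
        temp = t0
        for _ in range(steps):
            cand = neighbor(current, rng)
            cand_score = score(cand)
            # Metropolis criterion: always accept improvements, sometimes accept
            # worse candidates so the search can escape local optima.
            if cand_score >= current_score or rng.random() < math.exp(
                    (cand_score - current_score) / temp):
                current, current_score = cand, cand_score
                if current_score > best_score:
                    best, best_score = current, current_score
            temp *= cooling
        return best, best_score

    # Toy objective: maximize -(x - 3)^2, whose optimum is at x = 3.
    best, best_score = anneal(lambda x: -(x - 3) ** 2,
                              lambda x, rng: x + rng.uniform(-0.5, 0.5),
                              state=0.0)
    ```

    In the paper's setting, `state` would be a candidate rule base and `score` would combine rule accuracy with an interpretability term.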

    A new category of relations: combinationally constrained relations

    , Article Scientia Iranica ; Volume 16, Issue 1 D , 2009 , Pages 34-52 ; 10263098 (ISSN) Rohani Rankoohi, M. T ; Mirian Hosseinabadi, H ; Sharif University of Technology
    2009
    Abstract
    The normalization theory in relational database design is a classical subject investigated in many papers. The results of this research are the stronger normal forms, such as 5NF, DKNF, and 6NF. These normal forms exhibit fewer anomalies and redundancies, but that does not mean they are free of anomalies and redundancies. Each normal form is based on a particular kind of constraint. In this paper, we introduce relations that contain a new kind of constraint, called a "combinational constraint". We distinguish two important kinds of this constraint, namely strong and weak. We also classify combinationally constrained relations as single and... 

    Speech accent profiles: Modeling and synthesis

    , Article IEEE Signal Processing Magazine ; Volume 26, Issue 3 , 2009 , Pages 69-74 ; 10535888 (ISSN) Vaseghi, S ; Yan, Q ; Ghorshi, A ; Sharif University of Technology
    2009
    Abstract
    A discussion of speech accents is given, describing a set of statistical signal processing methods for the modeling, analysis, synthesis, and morphing of English language accents. Accent morphing deals with changing the accent of speech to a different accent. Accent itself is a distinctive pattern of pronunciation within a community of people who belong to a national, geographic, or socioeconomic grouping. The signal processing methodology for speech accent processing is then reviewed, and the concept of an accent profile is presented  

    Development of declustered processed earthquake accelerogram database for the Iranian Plateau: including near-field record categorization

    , Article Journal of Seismology ; Volume 23, Issue 4 , 2019 , Pages 869-888 ; 13834649 (ISSN) Khansefid, A ; Bakhshi, A ; Ansari, A ; Sharif University of Technology
    Springer Netherlands  2019
    Abstract
    In this paper, a comprehensive accelerogram database of the Iranian plateau, containing 3585 records with all three components, is gathered. The raw data are processed by a wavelet-based denoising method, and the results are compared with the contaminated data. All the data are classified into mainshock and aftershock categories using the time and spatial window method. Afterward, the data are categorized into pulse-like and non-pulse-like events based on the detection of a velocity pulse in the horizontal and/or vertical directions. Eventually, among the 3585 records, the ones with an average shear-wave velocity for the top 30 m of the subsurface soil profile are selected, and their important ground motion... 

    Mental arousal level recognition competition on the shared database

    , Article 27th Iranian Conference on Electrical Engineering, ICEE 2019, 30 April 2019 through 2 May 2019 ; 2019 , Pages 1730-1736 ; 9781728115085 (ISBN) Saidi, M ; Rezania, S ; Khazaei, E ; Taghibeyglou, B ; Hashemi, S. S ; Kaveh, R ; Abootalebi, V ; Bagheri, S ; Homayounfar, M ; Asadi, M ; Mohammadian, A ; Mozafari, M ; Hasanzadeh, N ; Dini, H ; Sarvi, H. M ; Sharif University of Technology
    Institute of Electrical and Electronics Engineers Inc  2019
    Abstract
    This paper presents the results of the shared task on arousal level recognition for the competition held in conjunction with the 27th Iranian Conference on Electrical Engineering (ICEE 2019). A database annotated with arousal level labels was released by the Research Center for Development of Advanced Technologies. The contest was held on the arousal database according to a defined protocol. Three teams were able to enter the final stage of the competition based on a comparison of their performance measure with the baseline method, which was proposed by the data owner. The aim of this paper is to outline the database and the protocol design, and to provide an overview of the top-ranked... 

    Extended two-dimensional PCA for efficient face representation and recognition

    , Article 2008 IEEE 4th International Conference on Intelligent Computer Communication and Processing, ICCP 2008, Cluj-Napoca, 28 August 2008 through 30 August 2008 ; October , 2008 , Pages 295-298 ; 9781424426737 (ISBN) Safayani, M ; Manzuri Shalmani, M. T ; Khademi, M ; Sharif University of Technology
    2008
    Abstract
    In this paper a novel method called Extended Two-Dimensional PCA (E2DPCA) is proposed, which is an extension of the original 2DPCA. We show that the covariance matrix of 2DPCA is equivalent to the average of the main diagonal of the covariance matrix of PCA. This implies that 2DPCA eliminates some covariance information that can be useful for recognition. Instead of using just the main diagonal, E2DPCA considers a radius of r diagonals around it and expands the averaging to include the covariance information within those diagonals. The parameter r unifies PCA and 2DPCA: r = 1 produces the covariance of 2DPCA, and r = n that of PCA. Hence, by controlling r it is possible to control the...
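    For reference, the 2DPCA image covariance that E2DPCA generalizes is straightforward to compute. This is a minimal NumPy sketch with our own variable names, not the paper's notation or its E2DPCA extension.

    ```python
    import numpy as np

    def image_covariance(images):
        """2DPCA covariance: average of (A - mean)^T (A - mean) over image matrices A."""
        A = np.asarray(images, dtype=float)        # shape: (num_images, height, width)
        centered = A - A.mean(axis=0)
        return np.einsum('khi,khj->ij', centered, centered) / len(A)

    # Toy example: two 2x2 "images". The resulting width x width matrix is what
    # 2DPCA (and, with extra diagonals, E2DPCA) eigendecomposes to obtain the
    # projection axes used for face representation.
    G = image_covariance([[[1.0, 0.0], [0.0, 0.0]],
                          [[-1.0, 0.0], [0.0, 0.0]]])
    print(G)
    ```

    Working with this small width-by-width matrix instead of the full PCA covariance of vectorized images is what makes 2DPCA-style methods efficient on face databases.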