Magister Teknologi Informasi Universitas Indonesia 2012 Data Mining
Transcript
  • Magister Teknologi Informasi, Universitas Indonesia, 2012. Data Mining.

  • More data is generated: bank, telecom, and other business transactions; scientific data (astronomy, biology, etc.); web, text, and e-commerce. More data is captured: storage technology is faster and cheaper, and DBMSs are capable of handling bigger databases.

  • We have large amounts of data stored in one or more databases. We are starved to find new information within those data (for research, competitive edge, etc.). We want to identify patterns or rules (trends and relationships) in those data. We know that certain data exist inside a database, but what are the consequences of that data's existence?

  • There is often information hidden in the data that is not readily evident. Human analysts may take weeks to discover useful information, and much of the data is never analyzed at all. This is the data gap: total new disk storage (since 1995) grows far faster than the number of analysts.

    [Chart: "The Data Gap". Total new disk storage shipped per year, roughly 105 PB in 1995 rising to about 13,000 PB in 2003, plotted against the number of science and engineering Ph.D.s, roughly 23,000-27,000 per year over 1990-1999.]

  • Data mining is a process of extracting previously unknown, valid and actionable information from large databases and then using the information to make crucial business decisions (Cabena et al., 1998). Data mining: discovering interesting patterns from large amounts of data (Han and Kamber, 2001).

  • Definition from [Connolly, 2005]: the process of extracting valid, previously unknown, comprehensive, and actionable information from large databases and using it to make crucial business decisions. The thin red line of data mining: it is all about finding patterns or rules by extracting data from large databases in order to find new information that could lead to new knowledge.

  • What is data mining? Certain names are more prevalent in certain US locations (O'Brien, O'Rourke, O'Reilly in the Boston area); grouping together similar documents returned by a search engine according to their context (e.g., Amazon rainforest vs. Amazon.com). What is not data mining? Looking up a phone number in a phone directory; querying a web search engine for information about Amazon.

  • Data mining in the knowledge discovery process (figure): raw data → (selection & cleaning, integration) → data warehouse / target data → (transformation) → transformed data → (data mining) → patterns → (interpretation & evaluation) → knowledge, with understanding of the data spanning the whole process.

  • Clustering, classification, association rules. Other methods: outlier detection, sequential patterns, prediction, trends and analysis of changes, and methods for special data types (e.g., spatial data mining, web mining).

  • Association rules try to find associations between items in a set of transactions. For example, in the case of associations between items bought by customers in a supermarket: 90% of transactions that purchase bread and butter also purchase milk.

    Antecedent: bread and butter. Consequent: milk. Confidence factor: 90%.

  • A transaction is a set of items: T = {ia, ib, ..., it}, T ⊆ I, where I is the set of all possible items {i1, i2, ..., in}. D, the task-relevant data, is a set of transactions (a database of transactions). Example: items sold by a supermarket (the itemset I): {sugar, parsley, onion, tomato, salt, bread, olives, cheese, butter}. A transaction by a customer (T): T1 = {sugar, onion, salt}. A database (D): {T1 = {salt, bread, olives}, T2 = {sugar, onion, salt}, T3 = {bread}, T4 = {cheese, butter}, T5 = {tomato}, ...}.

  • An association rule has the form P → Q, where P ⊂ I, Q ⊂ I, and P ∩ Q = ∅.

    Example: {bread} → {butter, cheese}; {onion, tomato} → {salt}.

  • Support of a rule P → Q = support of (P ∪ Q) in D: sD(P → Q) = sD(P ∪ Q), the percentage of transactions in D containing both P and Q (the number of transactions containing P and Q divided by the cardinality of D).

    Confidence of a rule P → Q: cD(P → Q) = sD(P ∪ Q) / sD(P), the percentage of transactions that contain both P and Q among the transactions that already contain P.
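
    A minimal Python sketch of these two measures, reusing the toy database D from the itemset slide above:

        # Sketch: support and confidence as defined above.
        D = [
            {"salt", "bread", "olives"},
            {"sugar", "onion", "salt"},
            {"bread"},
            {"cheese", "butter"},
            {"tomato"},
        ]

        def support(itemset, transactions):
            """Fraction of transactions that contain every item in `itemset`."""
            itemset = set(itemset)
            return sum(1 for t in transactions if itemset <= t) / len(transactions)

        def confidence(antecedent, consequent, transactions):
            """support(P u Q) / support(P) for the rule P -> Q."""
            return (support(set(antecedent) | set(consequent), transactions)
                    / support(antecedent, transactions))

        print(support({"salt"}, D))                # 0.4
        print(confidence({"salt"}, {"bread"}, D))  # 0.5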

  • Thresholds: minimum support (minsup) and minimum confidence (minconf). Frequent itemset P: the support of P is larger than the minimum support. Strong rule P → Q (c%): (P ∪ Q) is frequent and its confidence c is larger than the minimum confidence.

  • For rule {A} → {C}: support = support({A, C}) = 50%; confidence = support({A, C}) / support({A}) = 66.6%. For rule {C} → {A}: support = support({A, C}) = 50%; confidence = support({A, C}) / support({C}) = 100.0%.

    Min. support 50%, min. confidence 50%.

    Transaction ID   Items Bought
    2000             A, B, C
    1000             A, C
    4000             A, D
    5000             B, E, F

    Frequent Itemset   Support
    {A}                75%
    {B}                50%
    {C}                50%
    {A, C}             50%

  • Input: a database of transactions, where each transaction is a list of items (e.g., items purchased by a customer in one visit). Find all strong rules that associate the presence of one set of items with that of another set of items. Example: 98% of people who purchase tires and auto accessories also get automotive services done. There are no restrictions on the number of items in the head or body of the rule. The most famous algorithm is APRIORI.

  • Find the frequent itemsets: the sets of items that have minimum support. A subset of a frequent itemset must also be a frequent itemset; i.e., if {A, B} is a frequent itemset, both {A} and {B} must be frequent itemsets. Iteratively find frequent itemsets with cardinality from 1 to k (k-itemsets). Use the frequent itemsets to generate association rules. Source: [Sunysb, 2009]
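
    A compact, illustrative Python sketch of this level-wise search (join frequent k-itemsets, prune candidates with an infrequent subset, keep those meeting minimum support); it is not an optimized implementation:

        from itertools import combinations

        def apriori(transactions, min_support):
            """Return {frozenset itemset: support} for all frequent itemsets."""
            n = len(transactions)
            items = {i for t in transactions for i in t}
            current = {frozenset([i]) for i in items
                       if sum(i in t for t in transactions) / n >= min_support}
            frequent = {}
            k = 1
            while current:
                for s in current:
                    frequent[s] = sum(s <= t for t in transactions) / n
                # join step: candidate (k+1)-itemsets from frequent k-itemsets
                candidates = {a | b for a in current for b in current if len(a | b) == k + 1}
                # prune step (Apriori property): every k-subset must itself be frequent
                candidates = {c for c in candidates
                              if all(frozenset(sub) in current for sub in combinations(c, k))}
                current = {c for c in candidates
                           if sum(c <= t for t in transactions) / n >= min_support}
                k += 1
            return frequent

        # The 9-transaction example from the next slides, with min_sup = 2/9:
        D = [{"I1","I2","I5"}, {"I2","I4"}, {"I2","I3"}, {"I1","I2","I4"}, {"I1","I3"},
             {"I2","I3"}, {"I1","I3"}, {"I1","I2","I3","I5"}, {"I1","I2","I3"}]
        freq = apriori(D, 2/9)   # includes {I1,I2,I3} and {I1,I2,I5} as frequent 3-itemsets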

  • Consider a database, D, consisting of 9 transactions. Suppose the minimum support count required is 2 (i.e., min_sup = 2/9 = 22%) and the minimum confidence required is 70%. We first find the frequent itemsets using the Apriori algorithm; then association rules are generated using the minimum support and minimum confidence.

    TID    List of Items
    T100   I1, I2, I5
    T200   I2, I4
    T300   I2, I3
    T400   I1, I2, I4
    T500   I1, I3
    T600   I2, I3
    T700   I1, I3
    T800   I1, I2, I3, I5
    T900   I1, I2, I3

  • Step 1: Generating the 1-itemset frequent pattern.

    Scan D for the count of each candidate (C1), then compare each candidate's support count with the minimum support count. The set of frequent 1-itemsets, L1, consists of the candidate 1-itemsets satisfying minimum support. In the first iteration of the algorithm, each item is a member of the candidate set, and here C1 = L1:

    Itemset   Sup. Count
    {I1}      6
    {I2}      7
    {I3}      6
    {I4}      2
    {I5}      2

  • Step 2: Generating the 2-itemset frequent pattern. Generate C2 candidates from L1, scan D for the count of each candidate, and compare each candidate's support count with the minimum support count to obtain L2.

    C2 (candidates): {I1,I2}, {I1,I3}, {I1,I4}, {I1,I5}, {I2,I3}, {I2,I4}, {I2,I5}, {I3,I4}, {I3,I5}, {I4,I5}

    C2 with support counts:
    Itemset    Sup. Count
    {I1,I2}    4
    {I1,I3}    4
    {I1,I4}    1
    {I1,I5}    2
    {I2,I3}    4
    {I2,I4}    2
    {I2,I5}    2
    {I3,I4}    0
    {I3,I5}    1
    {I4,I5}    0

    L2 (frequent 2-itemsets):
    Itemset    Sup. Count
    {I1,I2}    4
    {I1,I3}    4
    {I1,I5}    2
    {I2,I3}    4
    {I2,I4}    2
    {I2,I5}    2

  • To discover the set of frequent 2-itemsets, L2, the algorithm uses L1 Join L1 to generate a candidate set of 2-itemsets, C2. Next, the transactions in D are scanned and the support count of each candidate itemset in C2 is accumulated (as shown in the middle table). The set of frequent 2-itemsets, L2, is then determined, consisting of those candidate 2-itemsets in C2 having minimum support. Note: we haven't used the Apriori property yet.

  • Step 3: Generating the 3-itemset frequent pattern. Generate C3 candidates from L2, scan D for the count of each candidate, and compare each candidate's support count with the minimum support count to obtain L3.

    C3 (after pruning) and L3:
    Itemset       Sup. Count
    {I1,I2,I3}    2
    {I1,I2,I5}    2

  • The generation of the set of candidate 3-itemsets, C3, involves use of the Apriori property. In order to find C3, we compute L2 Join L2: C3 = L2 Join L2 = {{I1, I2, I3}, {I1, I2, I5}, {I1, I3, I5}, {I2, I3, I4}, {I2, I3, I5}, {I2, I4, I5}}.

  • Based on the Apriori property that all subsets of a frequent itemset must also be frequent, we can determine that the four latter candidates cannot possibly be frequent. For example, take {I1, I2, I3}. Its 2-item subsets are {I1, I2}, {I1, I3} and {I2, I3}. Since all 2-item subsets of {I1, I2, I3} are members of L2, we keep {I1, I2, I3} in C3. Now take another example, {I2, I3, I5}, which shows how the pruning is performed. Its 2-item subsets are {I2, I3}, {I2, I5} and {I3, I5}.

  • BUT {I3, I5} is not a member of L2, hence it is not frequent and violates the Apriori property. Thus we have to remove {I2, I3, I5} from C3. Therefore, C3 = {{I1, I2, I3}, {I1, I2, I5}} after checking all members of the join result for pruning. Now the transactions in D are scanned in order to determine L3, consisting of those candidate 3-itemsets in C3 having minimum support.

  • Step 4: Generating the 4-itemset frequent pattern. The algorithm uses L3 Join L3 to generate a candidate set of 4-itemsets, C4. Although the join results in {{I1, I2, I3, I5}}, this itemset is pruned since its subset {I2, I3, I5} is not frequent. Thus C4 = ∅, and the algorithm terminates, having found all of the frequent itemsets. This completes the Apriori algorithm. What's next? These frequent itemsets will be used to generate strong association rules (rules that satisfy both minimum support and minimum confidence).

  • Step 5: Generating association rules from frequent itemsets. Procedure: for each frequent itemset I, generate all nonempty subsets of I. For every nonempty subset s of I, output the rule s → (I - s) if support_count(I) / support_count(s) ≥ min_conf, where min_conf is the minimum confidence threshold.
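
    A Python sketch of this rule-generation step, assuming a mapping from frequent itemsets to their supports such as the one produced by the Apriori sketch earlier:

        from itertools import combinations

        def generate_rules(frequent, min_conf):
            """Emit rules s -> (I - s) whose confidence reaches min_conf.

            `frequent` maps frozenset itemsets to their support."""
            rules = []
            for itemset, sup in frequent.items():
                if len(itemset) < 2:
                    continue
                for r in range(1, len(itemset)):
                    for s in map(frozenset, combinations(itemset, r)):
                        conf = sup / frequent[s]   # subsets of a frequent itemset are frequent
                        if conf >= min_conf:
                            rules.append((set(s), set(itemset - s), conf))
            return rules

        # For I = {I1, I2, I5} with min_conf = 0.7, the surviving rules are
        # {I1,I5} -> {I2}, {I2,I5} -> {I1} and {I5} -> {I1,I2}, matching the next slides
        # (other frequent itemsets contribute further rules, e.g. {I4} -> {I2}).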

  • In our example we had L = {{I1}, {I2}, {I3}, {I4}, {I5}, {I1,I2}, {I1,I3}, {I1,I5}, {I2,I3}, {I2,I4}, {I2,I5}, {I1,I2,I3}, {I1,I2,I5}}. Let's take I = {I1, I2, I5}. Its nonempty proper subsets are {I1,I2}, {I1,I5}, {I2,I5}, {I1}, {I2}, {I5}.

  • Let the minimum confidence threshold be, say, 70%. The resulting association rules are shown below, each listed with its confidence. R1: {I1,I2} → {I5}, confidence = sc{I1,I2,I5} / sc{I1,I2} = 2/4 = 50%; R1 is rejected. R2: {I1,I5} → {I2}, confidence = sc{I1,I2,I5} / sc{I1,I5} = 2/2 = 100%; R2 is selected. R3: {I2,I5} → {I1}, confidence = sc{I1,I2,I5} / sc{I2,I5} = 2/2 = 100%; R3 is selected.

  • R4: {I1} → {I2,I5}, confidence = sc{I1,I2,I5} / sc{I1} = 2/6 = 33%; R4 is rejected. R5: {I2} → {I1,I5}, confidence = sc{I1,I2,I5} / sc{I2} = 2/7 = 29%; R5 is rejected. R6: {I5} → {I1,I2}, confidence = sc{I1,I2,I5} / sc{I5} = 2/2 = 100%; R6 is selected. In this way, we have found three strong association rules.

  • Learn a method for predicting the instance class from pre-labeled (classified) instances. Many approaches: statistics, decision trees, neural networks, ...

  • Prepare a collection of records (the training set). Each record contains a set of attributes, one of which is the class. Find a model for the class attribute as a function of the values of the other attributes (decision tree, neural network, etc.). Prepare a test set to determine the accuracy of the model: usually the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it. Once happy with the accuracy, use the model to classify new instances.

  • The training set below (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class) is used to learn a classifier, which is then applied to the unlabeled records.

    Training set:
    Tid   Refund   Marital Status   Taxable Income   Cheat
    1     Yes      Single           125K             No
    2     No       Married          100K             No
    3     No       Single           70K              No
    4     Yes      Married          120K             No
    5     No       Divorced         95K              Yes
    6     No       Married          60K              No
    7     Yes      Divorced         220K             No
    8     No       Single           85K              Yes
    9     No       Married          75K              No
    10    No       Single           90K              Yes

    Records to classify:
    Refund   Marital Status   Taxable Income   Cheat
    No       Single           75K              ?
    Yes      Married          50K              ?
    No       Married          150K             ?
    Yes      Divorced         90K              ?
    No       Single           40K              ?
    No       Married          80K              ?

  • Model: decision tree learned from the training data. Splitting attributes: Refund at the root (Yes → NO; No → MarSt), then MarSt (Married → NO; Single, Divorced → TaxInc), then TaxInc (< 80K → NO; > 80K → YES).

  • There could be more than one tree that fits the same data! An alternative tree on the same training set puts MarSt at the root (Married → NO; Single, Divorced → Refund), then Refund (Yes → NO; No → TaxInc), then TaxInc (< 80K → NO; > 80K → YES).

  • Applying the model to test data: start from the root of the tree. Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ? Following the first tree: Refund = No → MarSt; MarSt = Married → the record is classified as Cheat = No.
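
    The slides do not prescribe a tool; purely as an illustration, here is a sketch using pandas and scikit-learn (assumed choices) to learn a decision tree from the training table above and classify the Refund = No, Married, 80K record:

        # Sketch only: the library choice (scikit-learn) is an assumption, not part of the slides.
        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier

        train = pd.DataFrame({
            "Refund": ["Yes","No","No","Yes","No","No","Yes","No","No","No"],
            "MarSt":  ["Single","Married","Single","Married","Divorced",
                       "Married","Divorced","Single","Married","Single"],
            "TaxInc": [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],   # in thousands
            "Cheat":  ["No","No","No","No","Yes","No","No","Yes","No","Yes"],
        })

        X = pd.get_dummies(train[["Refund", "MarSt"]], dtype=int)
        X["TaxInc"] = train["TaxInc"]
        model = DecisionTreeClassifier(random_state=0).fit(X, train["Cheat"])

        test = pd.DataFrame({"Refund": ["No"], "MarSt": ["Married"], "TaxInc": [80]})
        test_X = (pd.get_dummies(test[["Refund", "MarSt"]], dtype=int)
                  .reindex(columns=X.columns, fill_value=0))
        test_X["TaxInc"] = test["TaxInc"]
        print(model.predict(test_X))   # expected ['No'], consistent with the tree above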

  • Direct marketing. Goal: reduce the cost of mailing by targeting the set of consumers likely to buy a new cell-phone product. Approach: use the data for a similar product introduced before; we know which customers decided to buy and which decided otherwise, and this {buy, don't buy} decision forms the class attribute. Collect various demographic, lifestyle, and company-interaction information about all such customers (type of business, where they stay, how much they earn, etc.). Use this information as input attributes to learn a classifier model. From [Berry & Linoff] Data Mining Techniques, 1997.

  • Fraud detection. Goal: predict fraudulent cases in credit card transactions. Approach: use credit card transactions and the information on the account holder as attributes (when does the customer buy, what does he buy, how often does he pay on time, etc.). Label past transactions as fraudulent or fair; this forms the class attribute. Learn a model for the class of the transactions, and use this model to detect fraud by observing credit card transactions on an account.

  • Customer attrition/churn. Goal: predict whether a customer is likely to be lost to a competitor. Approach: use detailed records of transactions with each of the past and present customers to find attributes (how often the customer calls, where he calls, what time of day he calls most, his financial status, marital status, etc.). Label the customers as loyal or disloyal, and find a model for loyalty. From [Berry & Linoff] Data Mining Techniques, 1997.

  • Helps users understand the natural grouping or structure in a data set. Cluster: a collection of data objects that are similar to one another and thus can be treated collectively as one group. Clustering is unsupervised classification: there are no predefined classes. Clustering is a process of partitioning a set of data (or objects) into a set of meaningful sub-classes, called clusters.

  • Find natural grouping of instances given un-labeled data

  • A good clustering method will produce high-quality clusters in which the intra-class similarity (within a cluster) is high and the inter-class similarity (between clusters) is low. The quality of a clustering result also depends on the similarity measure used by the method and on its implementation, on the method's ability to discover some or all of the hidden patterns, and on the definition and representation of the clusters chosen.

  • Partitioning algorithms: construct various partitions and then evaluate them by some criterion. Hierarchical algorithms: create a hierarchical decomposition of the set of data (or objects) using some criterion; there is an agglomerative approach and a divisive approach.

  • Partitioning method: given a number k, partition a database D of n objects into a set of k clusters so that a chosen objective function is minimized (e.g., the sum of distances to the centers of the clusters). Finding the global optimum by exhaustively enumerating all partitions is too expensive, so heuristic methods based on iterative refinement of an initial partition are used.
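
    As an illustration of the iterative-refinement idea, a sketch using scikit-learn's KMeans (an assumed choice; the slide names no particular algorithm or library) on made-up 2-D points:

        import numpy as np
        from sklearn.cluster import KMeans

        points = np.array([[1, 1], [1.5, 2], [1, 0],      # one natural group
                           [8, 8], [8, 9], [9, 8.5]])     # another natural group

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
        print(km.labels_)           # cluster index per point, e.g. [0 0 0 1 1 1]
        print(km.cluster_centers_)  # the k cluster centers
        print(km.inertia_)          # objective: sum of squared distances to the centers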

  • Hierarchical decomposition of the data set (with respect to a given similarity measure) into a set of nested clusters. The result is represented by a so-called dendrogram; nodes in the dendrogram represent possible clusters. It can be constructed bottom-up (agglomerative approach) or top-down (divisive approach).

    A clustering is obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.

  • Single link: cluster similarity = similarity of the two most similar members. + fast; - potentially long and skinny clusters.


  • Complete link: cluster similarity = similarity of the two least similar members. + tight clusters; - slow.


  • Dendrogram: hierarchical clustering. The clustering is obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.
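
    A sketch of bottom-up (agglomerative) clustering with SciPy, showing single link versus complete link and cutting the dendrogram into flat clusters; the library and toy points are assumptions:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

        points = np.array([[1, 1], [1.5, 2], [1, 0], [8, 8], [8, 9], [9, 8.5]])

        single   = linkage(points, method="single")    # distance of the two closest members
        complete = linkage(points, method="complete")  # distance of the two farthest members

        # "Cut" each dendrogram: points merged below the threshold share a cluster label.
        print(fcluster(single,   t=3.0, criterion="distance"))   # e.g. [1 1 1 2 2 2]
        print(fcluster(complete, t=3.0, criterion="distance"))

        # dendrogram(single) would draw the tree itself if matplotlib is available.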

  • Understanding the data; data cleaning (missing values, noisy values, outliers); dates; nominal/numeric conversion; discretization; normalization; smoothing; transformation; attribute selection.

  • We can't be expected to be experts in all fields, but understanding the data can be extremely useful for data mining. What data is available? What available data is actually relevant or useful? Can the data be enriched from other sources? Are there historical datasets available? Who is the real expert to ask questions of? (Are the results at all sensible, or are they completely obvious?) Having answers to these questions before embarking on a data mining project is invaluable later on.

  • Number of instances available: 5,000 or more for reliable results. Number of attributes: depends on the data set, but any attribute with fewer than 10 instances is typically not worth including. Number of instances per class: more than 100 per class; if the classes are very unbalanced, consider stratified sampling.

  • Goal: maximizing data quality. Assess data quality, then correct the noted deficiencies. To assess data quality: analyze the data distribution (is there any strange distribution?); analyze data elements (check for inconsistencies, redundancy, missing values, outliers, etc.); conduct a physical audit (ensure data is recorded properly, for example by cross-checking data with customers); analyze business rules (check whether data violates business rules).

  • Exclude the attribute for which data is frequently missing; exclude records that have missing data; extrapolate missing values from other known values; use a predictive model to determine a value; for quantitative values, use a generic figure such as the average.
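
    A short pandas sketch of some of these options; the column names and values are hypothetical:

        import numpy as np
        import pandas as pd

        df = pd.DataFrame({"age":    [25, np.nan, 40, 31],
                           "income": [40.0, 52.0, np.nan, 48.0]})

        no_sparse_cols = df.dropna(axis=1)                  # drop attributes with missing data
        no_sparse_rows = df.dropna(axis=0)                  # drop records with missing data
        filled  = df.fillna(df.mean(numeric_only=True))     # quantitative: use the average
        guessed = df.interpolate()                          # estimate from neighbouring known values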

  • We want all dates to be in the same format. YYYYMMDD is an ISO standard, BUT it has some issues for data mining: the year 10,000 AD (we only have 4 digits); dates BC(E), e.g. -0300-02-03, are not valid YYYY-MM-DD dates; and, most importantly, it does not preserve intervals. Other representations: POSIX/Unix system date (number of seconds since 1970), etc.

  • Nominal data has no ordering, e.g., sex, country, etc. Some algorithms can't deal with nominal or with numeric attributes: decision trees deal best with nominal values, while neural networks and many clustering algorithms require only numeric attribute values. If the algorithm requires converting nominal to numeric: Binary field: one value is 0, the other is 1 (e.g., gender). Ordered fields: convert to numbers that preserve the order (e.g., A vs. C grade becomes 4 and 2 respectively). Few values: convert each value into a new binary attribute; for example, if the possible values of attribute AT are A, B, C, D, create 4 new attributes ATa, ATb, ATc, ATd, each with value 0 or 1. Many values: convert into groups of values, each with its own (binary) attribute, e.g., group the US states into 5 groups of 10. Unique values: ignore identifier-like attributes (drop the attribute).
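
    A pandas sketch of the binary, ordered, and few-values conversions above; the attribute names and values are made up:

        import pandas as pd

        df = pd.DataFrame({"gender": ["M", "F", "F", "M"],
                           "grade":  ["A", "C", "B", "A"],
                           "AT":     ["A", "B", "C", "D"]})

        df["gender_num"] = (df["gender"] == "F").astype(int)          # binary field: 0 / 1
        df["grade_num"]  = df["grade"].map({"A": 4, "B": 3, "C": 2})  # ordered: keep the order
        at_dummies = pd.get_dummies(df["AT"], prefix="AT", dtype=int) # few values: one binary
        df = pd.concat([df, at_dummies], axis=1)                      # attribute per value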

  • Some algorithms require nominal or discrete values. How can we turn a numeric value into a nominal value, or into a numeric value with a smaller range of values? There are several discretization techniques, often called 'binning': equal width; equal depth; class dependent; entropy; fuzzy (allow some fuzziness at the edges of the bins); non-disjoint (allow overlapping intervals); ChiMerge (use the chi-squared test in the same way as entropy); iterative (use some technique, then minimise classifier error); lazy (only discretize during classification, VERY lazy!); proportional k-interval (number of bins = square root of the number of instances).
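
    A pandas sketch of the two simplest schemes, equal-width and equal-depth binning; the income values are made up:

        import pandas as pd

        income = pd.Series([60, 70, 75, 85, 90, 95, 100, 120, 125, 220])

        equal_width = pd.cut(income, bins=4)   # 4 bins of equal width over the value range
        equal_depth = pd.qcut(income, q=4)     # 4 bins with (roughly) equal numbers of values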

  • We might want to normalise our data so that two numeric values are comparable, for example to compare age and income.

    Decimal scaling: v' = v / 10^k for the smallest k such that max(|v'|) < 1.
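
    A NumPy sketch of decimal scaling as defined above, together with min-max and z-score rescaling (two other common normalizations not shown on the slide); the sample values are made up:

        import numpy as np

        v = np.array([12_000.0, 45_000.0, 98_000.0, 60_000.0])

        min_max = (v - v.min()) / (v.max() - v.min())   # rescale into [0, 1]
        z_score = (v - v.mean()) / v.std()              # zero mean, unit variance
        k = int(np.ceil(np.log10(np.abs(v).max())))     # decimal scaling:
        decimal = v / 10**k                             # v' = v / 10^k, so max(|v'|) < 1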

  • In the case of noisy data, we might want to smooth the data so that it is more uniform.

    Some possible techniques: Regression: find a function fitting the data, and move each value some amount closer to what the function predicts (see classification). Clustering: some clustering techniques remove outliers; we could cluster the data to remove these noisy values. Binning: use some technique to discretize the data, and then smooth based on those 'bins'.

  • Transform data into a more meaningful form. For example: birth date is transformed to age; the date of the first transaction is transformed to the number of days since the customer became a member; the grades of each course are transformed to a cumulative GPA.
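
    A pandas sketch of such derived attributes; the column names and the reference date are assumptions:

        import pandas as pd

        df = pd.DataFrame({"birth_date":        ["1990-05-01", "1984-11-23"],
                           "first_transaction": ["2011-01-10", "2010-06-05"]})
        today = pd.Timestamp("2012-01-01")   # hypothetical reference date

        df["age"]            = (today - pd.to_datetime(df["birth_date"])).dt.days // 365
        df["days_as_member"] = (today - pd.to_datetime(df["first_transaction"])).dt.days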

  • Before getting to the data mining, we may want to either remove instances or select only a portion of the complete data set to work with. Why? Perhaps our algorithms don't scale well to the amount of data we have. Record-selection techniques: Partitioning: split the database into sections and work with each in turn (often not appropriate unless the algorithm is designed for it). Sampling: select a random subset of the data, which is hopefully representative, and use that. Attribute-selection techniques: Stepwise forward selection: find the best attribute and add it. Stepwise backward elimination: find the worst attribute and remove it. Genetic algorithms: use a 'survival of the fittest' approach along with random cross-breeding. Etc.
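
    A sketch of random sampling and stepwise forward selection; scikit-learn's SequentialFeatureSelector and the Iris data are assumed stand-ins, not part of the slides:

        from sklearn.datasets import load_iris
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_iris(return_X_y=True, as_frame=True)

        sample = X.sample(frac=0.3, random_state=0)   # random, hopefully representative subset

        selector = SequentialFeatureSelector(DecisionTreeClassifier(random_state=0),
                                             n_features_to_select=2, direction="forward")
        selector.fit(X, y)
        print(X.columns[selector.get_support()])      # the attributes kept by forward selection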

  • What technique would you use to solve each of these problems? Given a set of applicant attributes (name, salary, age, etc.), you want to decide whether or not to approve a customer's credit card application. Given national examination scores, you want to group Kabupatens (regencies) into three educational levels: Good, Average, Poor. You want to suggest to your customer a suitable pair of pants given her/his choice of shirt. You want to estimate the economic growth of Indonesia given some data (GNP, GDP, etc.).

  • Pak Bedu is a doctor who wants to use IT to help him decide whether or not a patient has cancer. To make that decision, Pak Bedu already has data on every patient, consisting of the results of 5 laboratory tests together with the decision on whether the patient has cancer. A sample of the data is shown below.

    What problems can you find in Pak Bedu's data?

    ID   T1   T2   T3   T4   T5   Cancer?
    P1   1    3    4    2    3    Yes
    P2   0    5    1    1         No
    P3   1    2    4    2    2    No
    P4   2    2    3    1    2    Yes
    P5   2    2    3    1    2    No

  • When Pak Bedu examines a single attribute (say T1), he finds 5 distinct values with the following counts: value 1 occurs 1234 times, value 2 occurs 2037 times, value 3 occurs 1659 times, value 4 occurs 1901 times, and value 11 occurs once. What can you conclude from looking at this data?

  • Pak Bedu expands his clinic business to 3 locations and leaves the management of patient data entirely to each clinic. He wants to learn the characteristics of his patients by collecting data from all three clinics, but he is confused because each clinic has a different data schema. What should Pak Bedu do? Clinic 1 schema: Pasien(Nama, TglLahir, Tinggi (meters), Berat (kg), JenisKelamin (L/P), Alamat, Provinsi). Clinic 2 schema: Pasien(Nama, TglLahir, Tinggi (centimeters), Berat (kg), JenisKelamin (P/W), Alamat, Kota). Clinic 3 schema: Pasien(Nama, Umur, Tinggi (meters), Berat (kg), JenisKelamin (L/P), Kota, Provinsi).

  • Pak Bedu also runs a health college (Sekolah Tinggi Kesehatan). Being generous, he wants to give scholarships to students who are working on their final thesis, but only to students who have the potential to finish the thesis within one semester. Pak Bedu has the grades of students who have already graduated, together with how long they took to complete their thesis and their studies. The schemas he has include: Tabel Mahasiswa(NPM, Nama, Umur, AsalDaerah, LamaSkripsi, LamaStudi), Tabel MataKuliah(Kode, Nama, Kelompok), Tabel Nilai(NPM, KodeMK, Nilai). Discuss what you could do to help Pak Bedu.

  • Pak Bedu expands his clinics to 100 branches. He wants to see patterns in patient visits: which regions have many sick people, when many people get sick, and so on. Pak Bedu has data about each clinic and the total number of patient visits per month. What can you do?
