%%% invariants.bib -- Bibliography for program invariant detection
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Daikon
%%%
%%%
%%% Daikon extensions
%%%
@InProceedings{CsallnerS2006:ICSE,
author = "Christoph Csallner and Yannis Smaragdakis",
title = "Dynamically discovering likely interface invariants",
crossref = "ICSE2006",
pages = "861--864",
note = "Emerging results track",
usesDaikon = 1,
basefilename = "csallner-icse-2006",
downloads = "https://ranger.uta.edu/~csallner/papers/csallner06dynamically.pdf PDF",
abstract =
"Dynamic invariant detection is an approach that has received considerable
attention in the recent research literature. A natural question arises in
languages that separate the interface of a code module from its
implementation: does an inferred invariant describe the interface or the
implementation? Furthermore, if an implementation is allowed to refine
another, as, for instance, in object-oriented method overriding, what is
the relation between the inferred invariants of the overriding and the
overridden method? The problem is of great practical interest. Invariants
derived by real tools, like Daikon, often suffer from internal
inconsistencies when overriding is taken into account, becoming unsuitable
for some automated uses. We discuss the interactions between overriding and
inferred invariants, and describe the implementation of an invariant
inference tool that produces consistent invariants for interfaces and
overridden methods.",
}
@InProceedings{KuzminaG2006,
author = "Nadya Kuzmina and Ruben Gamboa",
title = "Dynamic constraint detection for polymorphic behavior",
crossref = "OOPSLACompanion2006",
pages = "657--658",
note = "Poster",
abstract =
"Dynamic invariant detection, the automatic recovery of partial program
specifications by inferring likely constraints from program executions, has
been successful in the context of procedural programs. The implementation
for dynamic invariant detection examines only the declared type of a
variable, lacking many details in the context of object-oriented
programs. This paper shows how this technique can be extended to detect
invariants of object-oriented programs in the presence of polymorphism by
examining the runtime type of a polymorphic variable, which may have
different declared and runtime types. We demonstrate the improved accuracy
of the dynamically detected specification on two real-world examples: the
Money example from the JUnit testing framework tutorial, and a database
query engine model example, which we adopted from a commercial database
application. Polymorphic constraints in both cases are shown to reveal the
specification of the runtime behavior of the systems.",
usesDaikon = 1,
downloads = "http://portal.acm.org/ft_gateway.cfm?id=1176659&type=pdf&coll=GUIDE&dl=&CFID=15151515&CFTOKEN=6184618 PDF",
OLDdownloads = "http://www.cs.uwyo.edu/~ruben/sod/nadya/resources/pub/runtime-refined-constraints-tr-uwcs-07-01.pdf extended version (PDF)",
supersededby = "KuzminaG2007:WODA",
}
@TechReport{KuzminaG2007:TR,
author = "Nadya Kuzmina and Ruben Gamboa",
title = "Extending dynamic constraint detection with polymorphic analysis",
institution = "University of Wyoming",
year = 2007,
number = "UWCS-07-01",
address = "Laramie, WY",
month = jan,
usesDaikon = 1,
supersededby = "KuzminaG2007:WODA",
}
@InProceedings{KuzminaG2007:WODA,
author = "Nadya Kuzmina and Ruben Gamboa",
title = "Extending dynamic constraint detection with polymorphic analysis",
crossref = "WODA2007",
pages = "57--63",
abstract =
"The general technique for dynamically detecting likely invariants, as
implemented by Daikon, lacks specific object-oriented support for
polymorphism. Daikon examines only the declared type of a variable which
prohibits it from examination of the runtime variables in the presence of
polymorphism. The approach presented in this paper extends the technique
to consider the runtime type of a polymorphic variable, which may have
different declared and runtime types. The runtime behavior of a polymorphic
variable is captured by \emph{polymorphic constraints} which have the form
of an implication with the name of the runtime class in the antecedent. We
demonstrate the improved accuracy of the dynamically detected specification
on the \texttt{Money} example from the JUnit testing framework
tutorial. Polymorphic constraints are shown to reveal the specification of
the runtime behavior of the example.",
usesDaikon = 1,
downloads = "https://ieeexplore.ieee.org/document/4273458/ IEEE Xplore",
OLDdownloads = "http://www.cs.uwyo.edu/~ruben/sod/nadya/resources/pub/runtime-refined-constraints.pdf PDF;
http://www.cs.uwyo.edu/~ruben/sod/nadya/resources/pub/runtime-refined-constraints-tr-uwcs-07-01.pdf extended version (PDF)",
}
@InProceedings{KuzminaPGC2008,
author = "Nadya Kuzmina and John Paul and Ruben Gamboa and James Caldwell",
title = "Extending dynamic constraint detection with disjunctive constraints",
crossref = "WODA2008",
pages = "57--63",
abstract =
"The languages of current dynamic constraint detection techniques are
often specified by fixed grammars of universal properties. These
properties may not be sufficient to express more subtle facts that
describe the essential behavior of a given program. In an effort to make
the dynamically recovered specification more expressive and
program-specific we propose the state space partitioning technique as a
solution which effectively adds program-specific disjunctive properties
to the language of dynamic constraint detection. In this paper we
present ContExt, a prototype implementation of the state space
partitioning technique which relies on Daikon for dynamic constraint
inference tasks.
\par
In order to evaluate recovered specifications produced by ContExt, we
develop a methodology which allows us to measure quantitatively how
well a particular recovered specification approximates the essential
specification of a program's behavior. The proposed methodology is then
used to comparatively evaluate the specifications recovered by ContExt
and Daikon on two examples.",
usesDaikon = 1,
downloads = "http://www.cs.uwyo.edu/~jlc/papers/woda_08.pdf PDF",
OLDdownloads = "http://www.cs.uwyo.edu/~ruben/sod/nadya/resources/pub/disj-constraints.pdf PDF",
ACMdownloads = "http://dl.acm.org/citation.cfm?id=1401839 PDF",
}
@TechReport{PolikarpovaCM2008,
author = "Nadia Polikarpova and Ilinca Ciupa and Bertrand Meyer",
title = "A comparative study of programmer-written and automatically inferred contracts",
institution = ETHZ,
year = "2008",
number = "608",
address = ETHZaddr,
month = sep,
abstract =
"Where do contracts --- specification elements embedded in executable code
--- come from? To produce them, should we rely on the programmers, on
automatic tools, or some combination?
\par
Recent work, in particular the Daikon system, has shown that it is possible
to infer some contracts automatically from program executions. The main
incentive has been an assumption that most programmers are reluctant to
invent the contracts themselves. The experience of contract-supporting
languages, notably Eiffel, disproves that assumption: programmers will
include contracts if given the right tools. That experience also shows,
however, that the resulting contracts are generally partial and
occasionally incorrect.
\par
Contract-inference tools provide the opportunity for studying objectively
the quality of programmer-written contracts, and for assessing of the
respective roles of humans and tools. Working on an industrial base of
contract-equipped software, we applied Daikon to infer contracts, and
compared the results with the already present contracts.
\par
We found that a contract inference tool can be used to strengthen
programmer-written contracts, but cannot infer all contracts that humans
write. The tool generates around five times as many relevant contract
elements (assertions) as written by programmers; but it only finds about a
half of those originally written by programmers.
\par
The study also uncovered strong correlations between the quality of
inferred contracts and some other interesting code metrics.",
usesDaikon = "1",
OPTbasefilename = "",
downloads = "ftp://ftp.inf.ethz.ch/pub/publications/tech-reports/6xx/609.pdf PDF",
supersededby = "PolikarpovaCM2009",
OPTcategory = "",
OPTsummary = "",
}
@InProceedings{PolikarpovaCM2009,
author = "Nadia Polikarpova and Ilinca Ciupa and Bertrand Meyer",
title = "A comparative study of programmer-written and automatically inferred contracts",
crossref = "ISSTA2009",
pages = "93--104",
abstract =
"Where do contracts --- specification elements embedded in executable code
--- come from? To produce them, should we rely on the programmers, on
automatic tools, or some combination?
\par
Recent work, in particular the Daikon system, has shown that it is possible
to infer some contracts automatically from program executions. The main
incentive has been an assumption that most programmers are reluctant to
invent the contracts themselves. The experience of contract-supporting
languages, notably Eiffel, disproves that assumption: programmers will
include contracts if given the right tools. That experience also shows,
however, that the resulting contracts are generally partial and
occasionally incorrect.
\par
Contract inference tools provide the opportunity for studying objectively
the quality of programmer-written contracts, and for assessing the
respective roles of humans and tools. Working on 25 classes taken from
different sources such as widely-used standard libraries and code written
by students, we applied Daikon to infer contracts and compared the results
(totaling more than 19500 inferred assertion clauses) with the already
present contracts.
\par
We found that a contract inference tool can be used to strengthen
programmer-written contracts, but cannot infer all contracts that humans
write. The tool generates around five times as many relevant assertion
clauses as written by programmers; but it only finds around 60\% of those
originally written by programmers. Around a third of the generated
assertion clauses are either incorrect or irrelevant. The study also
uncovered interesting correlations between the quality of inferred
contracts and some code metrics.",
usesDaikon = 1,
OPTbasefilename = "",
downloads = "https://se.inf.ethz.ch/~meyer/publications/testing/citadel-issta.pdf PDF",
OPTdownloadsnonlocal = "",
OPTcategory = "",
OPTsummary = "",
}
@incollection{PalomoDuarteGDMB2010,
year={2010},
isbn={978-3-642-13910-9},
booktitle={Web Engineering},
volume={6189},
series={Lecture Notes in Computer Science},
editor={Benatallah, Boualem and Casati, Fabio and Kappel, Gerti and Rossi, Gustavo},
doi={10.1007/978-3-642-13911-6_45},
title={Takuan: A Tool for {WS-BPEL} Composition Testing Using Dynamic Invariant Generation},
url={http://dx.doi.org/10.1007/978-3-642-13911-6_45},
publisher={Springer Berlin Heidelberg},
keywords={Web services; service composition; WS-BPEL; white-box testing; dynamic invariant generation},
author={Palomo-Duarte, Manuel and García-Domínguez, Antonio and Medina-Bulo, Inmaculada and Alvarez-Ayllón, Alejandro and Santacruz, Javier},
pages={531--534},
language={English}
}
@article{PalomoDuarte20145041,
title = "Automatic dynamic generation of likely invariants for {WS-BPEL} compositions",
journal = "Expert Systems with Applications",
volume = "41",
number = "11",
pages = "5041--5055",
year = "2014",
issn = "0957-4174",
doi = "http://dx.doi.org/10.1016/j.eswa.2014.01.037",
url = "http://www.sciencedirect.com/science/article/pii/S095741741400061X",
author = "Manuel Palomo-Duarte and Antonio García-Domínguez and Inmaculada Medina-Bulo",
keywords = "Web service composition; WS-BPEL; Dynamic invariant generation; White-box testing",
abstract =
"The wide adoption of Web Services has led to the development of languages to
compose them, like the WS-BPEL standard. In order to check whether the
composition works as expected, one common approach is to analyze it and infer
functional properties describing its behavior. Traditional approaches for
inferring properties in WS-BPEL have been static: compositions are
transformed into specialized analysis models based on some formalization.
However, this formalization could be inexact due to theoretical limitations
or differing interpretations of the standard by implementers. Dynamic
invariant generation solves these problems by extracting the properties from
actual executions and has been successfully used in popular languages, but
not in WS-BPEL yet. In this work, we apply dynamic invariant generation to
WS-BPEL, providing innovative solutions for several features that require
special consideration, like highly multidimensional values in variables, an
advanced type system or unstructured code. We have implemented these
solutions in Takuan and evaluated its performance with several compositions
of varying complexity. We present the results obtained and a comparative
analysis of the efficiency and effectiveness of our solutions. Results show
that the solutions are successful in reducing the cost of applying dynamic
invariant generation and the number of uninteresting invariants generated.",
}
%%%
%%% Daikon applications
%%%
@InProceedings{RazKS2002,
author = "Orna Raz and Philip Koopman and Mary Shaw",
title = "Semantic Anomaly Detection in Online Data Sources",
crossref = "ICSE2002",
pages = "302--312",
usesDaikon = 1,
basefilename = "raz-icse-2002",
downloads = "http://www-2.cs.cmu.edu/~ornar/icse2002.pdf PDF;
http://www-2.cs.cmu.edu/~ornar/icse2002.ps PS;
http://www-2.cs.cmu.edu/~ornar/icse2002_presentation.pdf Slides",
abstract =
"Much of the software we use for everyday purposes incorporates elements
developed and maintained by someone other than the developer. These
elements include not only code and databases but also dynamic data feeds
from online data sources. Although everyday software is not mission
critical, it must be dependable enough for practical use. This is limited
by the dependability of the incorporated elements. It is particularly
difficult to evaluate the dependability of dynamic data feeds, because they
may be changed by their proprietors as they are used. Further, the
specifications of these data feeds are often even sketchier than the
specifications of software components. We demonstrate a method of inferring
invariants about the normal behavior of dynamic data feeds. We use these
invariants as proxies for specifications to perform on-going detection of
anomalies in the data feed. We show the feasibility of our approach and
demonstrate its usefulness for semantic anomaly detection: identifying
occasions when a dynamic data feed is delivering unreasonable values, even
though its behavior may be superficially acceptable (i.e., it is delivering
parsable results in a timely fashion).",
}
@InProceedings{HangalL02,
author = "Sudheendra Hangal and Monica S. Lam",
title = "Tracking down software bugs using automatic anomaly detection",
crossref = "ICSE2002",
pages = "291--301",
}
@InProceedings{HangalCNC2005,
author = "Sudheendra Hangal and Naveen Chandra and Sridhar
Narayanan and Sandeep Chakravorty",
title = "{IODINE}: A tool to automatically infer dynamic invariants
for hardware designs",
crossref = "DAC2005",
pages = "775--778",
}
@TechReport{GroceV2002,
author = "Alex Groce and Willem Visser",
title = "What went wrong: Explaining counterexamples",
institution = "RIACS, USRA",
year = 2002,
number = "02-08",
supersededby = "GroceV2003",
usesDaikon = 1,
basefilename = "groce-tr0208-2002",
downloads = "http://www-2.cs.cmu.edu/~agroce/RIACS0208.ps PS",
abstract =
"Model checking, initially successful in the field of hardware design, has
recently been applied to software. One of the chief advantages of model
checking is the production of counterexamples demonstrating that a system
does not satisfy a specification. However, it may require a great deal of
human effort to extract the essence of an error from even a detailed
source-level trace of a failing run. We use an automated method for
finding multiple versions of an error (and similar executions that do not
produce an error), and analyze these executions to produce a more succinct
description of the key elements of the error.",
}
@InProceedings{GroceV2003,
author = "Alex Groce and Willem Visser",
title = "What Went Wrong: Explaining Counterexamples",
crossref = "SPIN2003",
pages = "121--135",
usesDaikon = 1,
basefilename = "groce-spin-2003",
downloads = "https://spinroot.com/spin/Workshops/ws03/GroceVisser.pdf PDF",
abstract =
"One of the chief advantages of model checking is the production of
counterexamples demonstrating that a system does not satisfy a
specification. However, it may require a great deal of human effort to
extract the essence of an error from even a detailed source-level trace of
a failing run. We use an automated method for finding multiple versions of
an error (and similar executions that do not produce an error), and analyze
these executions to produce a more succinct description of the key elements
of the error. The description produced includes identification of portions
of the source code crucial to distinguishing failing and succeeding runs,
differences in invariants between failing and non-failing runs, and
information on the necessary changes in scheduling and environmental
actions needed to cause successful runs to fail."
}
@TechReport{XieN2003:TR,
author = "Tao Xie and David Notkin",
title = "Exploiting synergy between testing and inferred partial
specifications",
institution = UWCSE,
year = 2003,
number = "UW-CSE-03-04-02",
address = UWCSEaddr,
month = apr,
usesDaikon = 1,
supersededby = "XieN2003:WODA",
basefilename = "xie-tr030402-2003",
OLDdownloads = "https://www.cs.washington.edu/tr/2003/04/UW-CSE-03-04-02.pdf PDF",
}
@InProceedings{XieN2003:WODA,
author = "Tao Xie and David Notkin",
title = "Exploiting synergy between testing and inferred partial
specifications",
crossref = "WODA2003",
pages = "17--20",
usesDaikon = 1,
supersededby = "XieN2003:ASE",
basefilename = "xie-woda-2003",
downloads = "http://taoxie.cs.illinois.edu/publications/testspecsynergy-woda03.pdf PDF",
OLDdownloads = "http://people.engr.ncsu.edu/txie/testspecsynergy-woda03.ppt Slides",
}
@InProceedings{XieN2003:ASE,
author = "Tao Xie and David Notkin",
title = "Tool-assisted unit test selection based on operational
violations",
crossref = "ASE2003",
pages = "40--48",
abstract =
"Unit testing, a common step in software development, presents a
challenge. When produced manually, unit test suites are often insufficient
to identify defects. The main alternative is to use one of a variety of
automatic unit test generation tools: these are able to produce and execute
a large number of test inputs that extensively exercise the unit under
test. However, without a priori specifications, developers need to manually
verify the outputs of these test executions, which is generally
impractical. To reduce this cost, unit test selection techniques may be
used to help select a subset of automatically generated test inputs. Then
developers can verify their outputs, equip them with test oracles, and put
them into the existing test suite. In this paper, we present the
operational violation approach for unit test selection, a black-box
approach without requiring a priori specifications. The approach
dynamically generates operational abstractions from executions of the
existing unit test suite. Any automatically generated tests violating the
operational abstractions are identified as candidates for selection. In
addition, these operational abstractions can guide test generation tools to
produce better tests. To experiment with this approach, we integrated the
use of Daikon (a dynamic invariant detection tool) and Jtest (a commercial
Java unit testing tool). An experiment is conducted to assess this
approach.",
usesDaikon = 1,
supersededby = "XieN2006",
basefilename = "xie-ase-2003",
downloads = "https://taoxie.cs.illinois.edu/publications/ase03.pdf PDF;
https://taoxie.cs.illinois.edu/publications/unittestselect-ase03-slides.pdf PowerPoint",
}
@Article{XieN2006,
author = "Tao Xie and David Notkin",
title = "Tool-assisted unit test generation and selection based on
operational abstractions",
journal = ASEjournal,
year = 2006,
volume = 13,
number = 3,
pages = "345--371",
month = jul,
abstract =
"Unit testing, a common step in software development, presents a challenge.
When produced manually, unit test suites are often insufficient to
identify defects. The main alternative is to use one of a variety of
automatic unit-test generation tools: these are able to produce and
execute a large number of test inputs that extensively exercise the unit
under test. However, without a priori specifications, programmers need to
manually verify the outputs of these test executions, which is generally
impractical. To reduce this cost, unit-test selection techniques may be
used to help select a subset of automatically generated test inputs. Then
programmers can verify their outputs, equip them with test oracles, and put
them into the existing test suite. In this paper, we present the
operational violation approach for unit-test generation and selection, a
black-box approach without requiring a priori specifications. The
approach dynamically generates operational abstractions from executions of
the existing unit test suite. These operational abstractions guide test
generation tools to generate tests to violate them. The approach selects
those generated tests violating operational abstractions for
inspection. These selected tests exercise some new behavior that has not
been exercised by the existing tests. We implemented this approach by
integrating the use of Daikon (a dynamic invariant detection tool) and
Parasoft Jtest (a commercial Java unit testing tool), and conducted several
experiments to assess the approach.",
usesDaikon = 1,
basefilename = "xie-ase-2006",
downloads = "https://taoxie.cs.illinois.edu/publications/xiease03-journal.pdf PDF",
}
@InProceedings{XieN2003:FATES,
author = "Tao Xie and David Notkin",
title = "Mutually enhancing test generation and specification inference",
booktitle = "FATES 2003: 3rd International Workshop on Formal Approaches to Testing of Software",
pages = "63--72",
year = 2003,
address = Montreal,
month = oct # "~6,",
}
@Misc{Xie2003:JovExplanation,
author = "Tao Xie",
howpublished = "Personal communication",
month = aug,
year = 2003
}
@InProceedings{Xie2002,
author = "Tao Xie",
title = "Checking inside the black box: Regression fault exposure
and localization based on value spectra differences",
crossref = "FSESRF2002",
NEEDpages = "*",
usesDaikon = 1,
downloads = "https://people.engr.ncsu.edu/txie/TaoFSE-10studentposter.pdf PDF;
https://people.engr.ncsu.edu/txie/fseposter.ppt Slides",
supersededby = "XieN2002:TR",
}
@TechReport{XieN2002:TR,
author = "Tao Xie and David Notkin",
title = "Checking inside the black box: Regression fault exposure
and localization based on value spectra differences",
institution = UWCSE,
year = 2002,
number = "UW-CSE-02-12-04",
address = UWCSEaddr,
month = dec,
usesDaikon = 1,
basefilename = "xie-tr021204-2002",
OLDdownloads = "https://people.engr.ncsu.edu/txie/publications/UW-CSE-02-12-04.pdf PDF",
abstract =
"We present a new fault exposure and localization approach intended to
increase the effectiveness of regression testing. In particular, we extend
traditional regression testing, which strongly focuses on black box
comparisons, to compare internal program states. These value spectra
differences allow a more detailed comparison of executions of the new and
old versions of a program. In particular, our approach checks inside the
program black box to observe unit behaviors and further checks inside the
unit black box to observe some internal variable values besides the ones
propagated outside the unit. This approach exposes faults without requiring
the faults to be propagated to the outputs of the system or unit. Two
heuristics are proposed to locate regression faults based on a fault
propagation model. An experiment is conducted to assess their
effectiveness. The initial results show our value-spectra-comparison
approach can increase the regression fault exposure probability effectively
and identify the locations of regression faults accurately.",
supersededby = "XieN2004"
}
@InProceedings{XieN2004,
author = "Tao Xie and David Notkin",
title = "Checking inside the black box: Regression testing based
on value spectra differences",
crossref = "ICSM2004",
pages = "28--37",
usesDaikon = 1,
basefilename = "xie-iscm-2004",
downloads = "https://taoxie.cs.illinois.edu/publications/icsm04.pdf PDF;
https://taoxie.cs.illinois.edu/publications/valuespectra-icsm04-slides.pdf Slides",
abstract =
"Comparing behaviors of program versions has become an important task in
software maintenance and regression testing. Traditional regression testing
strongly focuses on black-box comparison of program outputs. Program
spectra have recently been proposed to characterize a program's behavior
inside the black box. Comparing program spectra of program versions offers
insights into the internal behavior differences between versions. In this
paper, we present a new class of program spectra, value spectra, which
enriches the existing program spectra family. We compare the value spectra
of an old version and a new version to detect internal behavior deviations
in the new version. We use a deviation-propagation call tree to present the
deviation details. Based on the deviation-propagation call tree, we propose
two heuristics to locate deviation roots, which are program locations that
trigger the behavior deviations. We have conducted an experiment on seven C
programs to evaluate our approach. The results show that our approach can
effectively expose program behavior differences between versions even when
their program outputs are the same, and our approach reports deviation
roots with high accuracy for most programs.",
}
@InProceedings{Gupta2003,
author = "Neelam Gupta",
title = "Generating test data for dynamically discovering likely
program invariants",
crossref = "WODA2003",
pages = "21--24",
supersededby = "GuptaH2003",
usesDaikon = 1,
basefilename = "gupta-woda-2003",
downloads = "http://www.cs.nmsu.edu/~jcook/woda2003/papers/Gupta.pdf PDF",
abstract =
"Dynamic detection of program invariants is emerging as an important
research area with many challenging problems. As with any dynamic approach,
the accuracy of dynamic discovery of likely program invariants depends on
the quality of test cases used to detect invariants. Therefore, generating
suitable test cases that support accurate detection of program invariants
is crucial to the dynamic approach for invariant detection.
\par
In this paper, we explore new directions in using the existing test data
generation techniques to improve the accuracy of dynamically detected
program invariants. First we discuss the augmentation of existing test
suites to improve the accuracy of dynamically discovered invariants. The
augmentation of the test suite may be done prior to running the dynamic
analysis if the variables and expressions whose values will be monitored at
runtime are known in advance. On the other hand, the dynamic analysis may
be run first using an available test suite to obtain an initial guess of
program invariants. These guessed invariants may then be used to generate
test cases to augment the test suite. We also propose the use of existing
test data generation techniques in improving the accuracy of invariants
guessed using an already available test suite.",
}
@InProceedings{GuptaH2003,
author = "Neelam Gupta and Zachary V. Heidepriem",
title = "A new structural coverage criterion for dynamic detection
of program invariants",
crossref = "ASE2003",
pages = "49--58",
usesDaikon = 1,
basefilename = "gupta-ase-2003",
NOdownloads = "1",
abstract =
"Dynamic detection of program invariants is emerging as an important
research area with many challenging problems. Generating suitable test
cases that support accurate detection of program invariants is crucial to
the dynamic approach for invariant detection. In this paper, we propose a
new structural coverage criterion called Invariant-coverage criterion for
dynamic detection of program invariants. We also show how the
invariant-coverage criterion can be used to improve the accuracy of
dynamically detected invariants. We first used the Daikon tool to report
likely program invariants using the branch coverage and all definition-use
pair coverage test suites for several programs. We then generated
invariant-coverage suites for these likely invariants. When Daikon was run
with the invariant-coverage suites, several spurious invariants reported
earlier by the branch coverage and definition-use pair coverage test suites
were removed from the reported invariants. Our approach also produced more
meaningful invariants than randomly generated test suites.",
}
@InProceedings{PytlikRKR2003,
author = "Brock Pytlik and Manos Renieris and Shriram Krishnamurthi
and Steven P. Reiss",
title = "Automated fault localization using potential invariants",
crossref = "AADEBUG2003",
pages = "273--276",
usesDaikon = 1,
basefilename = "pytlik-aadebug-2003",
downloads = "https://cs.brown.edu/~sk/Publications/Papers/Published/prkr-auto-fault-loc-inv/paper.pdf PDF",
abstract =
"We present a general method for fault localization based on abstracting
over program traces, and a tool that implements the method using Ernst's
notion of potential invariants. Our experiments so far have been
unsatisfactory, suggesting that further research is needed before
invariants can be used to locate faults.",
}
@TechReport{Pytlik2003,
author = "Brock Pytlik",
title = "Automatic debugging using potential invariants",
institution = "Brown University",
year = 2003,
type = "Honors Thesis",
address = "Providence, Rhode Island",
month = may,
usesDaikon = 1,
basefilename = "pytlik-thesis-2003",
downloads = "https://cs.brown.edu/research/pubs/theses/ugrad/2003/bpytlik.pdf PDF",
abstract = "This thesis does not contain an abstract."
}
@InProceedings{ZimmermannWDZ2004,
author = "Thomas Zimmermann and Peter Wei{\ss}gerber and Stephan Diehl and Andreas Zeller",
authorASCII = "Thomas Zimmermann and Peter Weissgerber and Stephan Diehl and Andreas Zeller",
authorASCII2 = "Thomas Zimmermann and Peter WeiBgerber and Stephan Diehl and Andreas Zeller",
title = "Mining version histories to guide software changes",
crossref = "ICSE2004",
pages = "563--572",
abstract =
"We apply data mining to version histories in order to guide programmers
along related changes: ``Programmers who changed these functions also
changed...''. Given a set of existing changes, such rules (a) suggest
and predict likely further changes, (b) show up item coupling that is
undetectable by program analysis, and (c) prevent errors due to
incomplete changes. After an initial change, our ROSE prototype can
correctly predict 26\% of further files to be changed---and 15\% of the
precise functions or variables. The topmost three suggestions contain
a correct location with a likelihood of 64\%."
}
@InProceedings{NSKL2005,
author = "Sri Hari Krishna Narayanan and Seung Woo Son and Mahmut Kandemir
and Feihui Li",
title = "Using loop invariants to fight soft errors in data caches",
booktitle = "Asia and South Pacific Design Automation Conference",
pages = "1317--1320",
year = 2005,
address = "Shanghai, China",
month = jan # "~18--21,",
usesDaikon = 1,
basefilename = "n-aspdac2005",
downloads = "https://www.mcs.anl.gov/~snarayan/publications/loopinvariants.pdf PDF",
abstract =
"Ever scaling process technology makes embedded systems more vulnerable to
soft errors than in the past. One of the generic methods used to fight soft
errors is based on duplicating instructions either in the spatial or
temporal domain and then comparing the results to see whether they are
different. This full duplication based scheme, though effective, is very
expensive in terms of performance, power, and memory space. In this paper,
we propose an alternate scheme based on loop invariants and present
experimental results which show that our approach catches 62\% of the
errors caught by full duplication, when averaged over all benchmarks
tested. In addition, it reduces the execution cycles and memory demand of
the full duplication strategy by 80\% and 4\%, respectively."
}
@InProceedings{CsallnerS2006:ISSTA,
author = "Christoph Csallner and Yannis Smaragdakis",
title = "{DSD}-{Crasher}: A hybrid analysis tool for bug finding",
crossref = "ISSTA2006",
pages = "245--254",
abstract =
"DSD-Crasher is a bug finding tool that follows a three-step approach
to program analysis:
\par
\textbf{D.} Capture the program's intended execution behavior with
dynamic invariant detection. The derived invariants exclude many
unwanted values from the program's input domain.
\par
\textbf{S.} Statically analyze the program within the restricted input
domain to explore many paths.
\par
\textbf{D.} Automatically generate test cases that focus on verifying
the results of the static analysis. Thereby confirmed results are never
false positives, as opposed to the high false positive rate inherent in
conservative static analysis.
\par
This three-step approach yields benefits compared to past two-step
combinations in the literature. In our evaluation with third-party
applications, we demonstrate higher precision over tools that lack a
dynamic step and higher efficiency over tools that lack a static
step.",
basefilename = "csallner-issta2006",
downloads = "https://ranger.uta.edu/~csallner/papers/csallner06dsd-crasher.pdf PDF;
https://ranger.uta.edu/~csallner/cnc/ Implementation",
usesDaikon = 1,
supersededby = "CsallnerSX2008",
}
@Article{CsallnerSX2008,
author = "Christoph Csallner and Yannis Smaragdakis and Tao Xie",
title = "{DSD-Crasher}: A hybrid analysis tool for bug finding",
journal = TOSEM,
year = 2008,
volume = 17,
number = 2,
pages = "8:1--8:37",
month = apr,
abstract =
"DSD-Crasher is a bug finding tool that follows a three-step approach to
program analysis:
\par
\textbf{D.} Capture the program's intended execution behavior with dynamic
invariant detection. The derived invariants exclude many unwanted values
from the program's input domain.
\par
\textbf{S.} Statically analyze the program within the restricted input
domain to explore many paths.
\par
\textbf{D.} Automatically generate test cases that focus on reproducing the
predictions of the static analysis. Thereby confirmed results are feasible.
\par
This three-step approach yields benefits compared to past two-step
combinations in the literature. In our evaluation with third-party
applications, we demonstrate higher precision over tools that lack a
dynamic step and higher efficiency over tools that lack a static step.",
usesDaikon = 1,
downloads = "https://taoxie.cs.illinois.edu/publications/tosem08-dsdcrasher.pdf PDF",
}
@Article{RenCR2006,
author = "Xiaoxia Ren and Ophelia C. Chesley and Barbara G. Ryder",
title = "Identifying failure causes in {Java} programs: An application of change impact analysis",
journal = TSE,
year = 2006,
volume = 32,
number = 9,
pages = "718--732",
month = sep,
downloads = "https://www.computer.org/10.1109/TSE.2006.90 DOI",
abstract =
"During program maintenance, a programmer may make changes that enhance
program functionality or fix bugs in code. Then, the programmer usually
will run unit/regression tests to prevent invalidation of previously tested
functionality. If a test fails unexpectedly, the programmer needs to
explore the edit to find the failure-inducing changes for that test. Crisp
uses results from Chianti, a tool that performs semantic change impact
analysis [1], to allow the programmer to examine those parts of the edit
that affect the failing test. Crisp then builds a compilable intermediate
version of the program by adding a programmer-selected partial edit to the
original code, augmenting the selection as necessary to ensure
compilation. The programmer can reexecute the test on the intermediate
version in order to locate the exact reasons for the failure by
concentrating on the specific changes that were applied. In nine initial
case studies on pairs of versions from two real Java programs, Daikon [2]
and Eclipse jdt compiler [3], we were able to use Crisp to identify the
failure-inducing changes for all but 1 of 68 failing tests. On average, 33
changes were found to affect each failing test (of the 67), but only 1-4 of
these changes were found to be actually failure-inducing.",
usesDaikonAsTestSubject = 1,
}
@InProceedings{ChesleyRR2005,
author = "Ophelia Chesley and Xiaoxia Ren and Barbara G. Ryder",
title = "Crisp: A debugging tool for {Java} programs",
crossref = "ICSM2005",
pages = "401--410",
usesDaikonAsTestSubject = 1,
downloads = "https://www.computer.org/10.1109/ICSM.2005.37 DOI",
abstract =
"Crisp is a tool (i.e., an Eclipse plug-in) for constructing intermediate
versions of a Java program that is being edited in an IDE such as
Eclipse. After a long editing session, a programmer usually will run
regression tests to make sure she has not invalidated previously checked
functionality. If a test fails unexpectedly, Crisp uses input from Chianti,
a tool for semantic change impact analysis [8], to allow the programmer to
select parts of the edit that affected the failing test and to add them to
the original program, creating an intermediate version guaranteed to
compile. Then the programmer can re-execute the test in order to locate the
exact reasons for the failure by concentrating on those affecting changes
that were applied. Using Crisp, a programmer can iteratively select, apply,
and undo individual (or sets of) affecting changes and, thus effectively
find a small set of failure-inducing changes.",
}
@InProceedings{DenmatGD2005,
author = "Tristan Denmat and Arnaud Gotlieb and Mireille Ducass{\'e}",
authorASCII = "Mireille Ducasse",
title = "Proving or disproving likely invariants with constraint reasoning",
booktitle = "15th Workshop on Logic-based Methods in Programming Environments (WLPE'05)",
NEEDpages = "*",
year = 2005,
address = "Sitges (Barcelona), Spain",
month = oct # "~5,",
downloads = "https://arxiv.org/pdf/cs/0508108.pdf PDF",
abstract =
"A program invariant is a property that holds for every execution of the
program. Recent work suggest to infer likely-only invariants, via dynamic
analysis. A likely invariant is a property that holds for some executions
but is not guaranteed to hold for all executions. In this paper, we present
work in progress addressing the challenging problem of automatically
verifying that likely invariants are actual invariants. We propose a
constraint-based reasoning approach that is able, unlike other approaches,
to both prove or disprove likely invariants. In the latter case, our
approach provides counter-examples. We illustrate the approach on a
motivating example where automatically generated likely invariants are
verified.",
usesDaikon = 1,
}
@TechReport{BowringHR2005,
author = "James F. Bowring and Mary Jean Harrold and James M. Rehg",
title = "Improving the classification of software behaviors using ensembles of control-flow and data-flow classifiers",
institution = "Georgia Institute of Technology College of Computing",
year = 2005,
number = "GIT-CERCS-05-10",
address = "Atlanta, GA, USA",
month = apr,
usesDaikon = "Kvasir",
downloads = "https://www.cercs.gatech.edu/www.cercs.gatech.edu/tech-reports/tr2005/git-cercs-05-10.pdf PDF",
abstract =
"One approach to the automatic classification of program behaviors is to
view these behaviors as the collection of all the program's
executions. Many features of these executions, such as branch profiles, can
be measured, and if these features accurately predict behavior, we can
build automatic behavior classifiers from them using statistical
machine-learning techniques. Two key problems in the development of useful
classifiers are (1) the costs of collecting and modeling data and (2) the
adaptation of classifiers to new or unknown behaviors. We address the first
problem by concentrating on the properties and costs of individual features
and the second problem by using the active-learning paradigm. In this
paper, we present our technique for modeling a data-flow feature as a
stochastic process exhibiting the Markov property. We introduce the novel
concept of databins to summarize, as Markov models, the transitions of
values for selected variables. We show by empirical studies that
databin-based classifiers are effective. We also describe ensembles of
classifiers and how they can leverage their components to improve
classification rates. We show by empirical studies that ensembles of
control-flow and data-flow based classifiers can be more effective than
either component classifier.",
}
@TechReport{EvansP2004,
author = "David Evans and Michael Peck",
title = "Simulating critical software engineering",
institution = "University of Virginia Computer Science Department",
year = 2004,
number = "CS-2004-03",
address = "Charlottesville, VA, USA",
month = feb,
usesDaikon = 1,
supersededby = "EvansP2006",
abstract =
"One goal of many introductory software engineering courses is to
simulate realistic software engineering. Unfortunately, many of
the practical constraints of typical courses are antithetical to that
goal: instead of working in large teams on large projects, dealing
with changing requirements and maintaining programs over many
years, courses generally involve students working alone or in
small teams with short projects that end the first time the program
works correctly on some selected input. Of course, it is
impossible (and undesirable) to carry out full industrial software
development within the context of a typical university course.
However, it is possible to simulate some aspects of safety critical
software engineering in an introductory software engineering
course. This paper describes an approach to teaching introductory
software engineering that focuses on using lightweight analysis
tools to aid in producing secure, robust and maintainable
programs. We describe how assignments were designed that
integrate analysis tools with typical software engineering material
and report on results from an experiment measuring students'
understanding of program invariants.",
downloads = "http://www.cs.virginia.edu/~techrep/CS-2004-03.pdf PDF",
}
@InProceedings{EvansP2006,
author = "David Evans and Michael Peck",
title = "Inculcating invariants in introductory courses",
crossref = "ICSE2006",
pages = "673--678",
usesDaikon = 1,
downloads = "https://www.cs.virginia.edu/~evans/pubs/inculcating-packaged.pdf PDF",
abstract =
"One goal of introductory software engineering courses is to motivate
and instill good software engineering habits. Unfortunately, practical
constraints on typical courses often lead to student experiences that
are antithetical to that goal: instead of working in large teams and
dealing with changing requirements and maintaining programs over many
years, courses generally involve students working alone or in small
teams with short projects that end the first time the program works
correctly on some selected input. Small projects tend to reinforce poor
software engineering practices. Since the programs are small enough to
manage cognitively in ad hoc ways, effort spent more precisely
documenting assumptions seems wasteful. It is infeasible to carry out
full industrial software development within the context of a typical
university course. However, it is possible to simulate some aspects of
safety critical software engineering in an introductory software
engineering course. This paper describes an approach that focuses on
thinking about and precisely documenting invariants, and checking
invariants using lightweight analysis tools. We describe how assignments
were designed to emphasize the importance of invariants and to
incorporate program analysis tools with typical software engineering
material and report on results from an experiment measuring students'
understanding of program invariants.",
}
@MastersThesis{Rad2005,
author = "Soroush Karimi Rad",
title = "Can structural test adequacy criteria be used to predict the quality of generated invariants?",
school = "University of Antwerp Department of Mathematics and Computer Science",
year = 2005,
address = "Antwerp",
NOmonthAVAILABLE = "*",
usesDaikon = 1,
abstract =
"The use of invariants during software development has many advantages.
Among other things, invariants can be used to formally specify
programs, to check certain vital properties at run-time, improve
communication between programmers and make understanding programs
easier. Despite these advantages, many programs lack explicit
invariants. Dynamic invariant generation can automatically derive
invariants from the program source but needs an initial test
suite. Moreover, the quality of the generated invariants depends on
this initial test suite. Many programs have test suites for detecting
errors which are designed according to structural coverage criteria. In
order to reuse these test suites for invariant generation, we must
investigate the ability of structural coverage criteria to produce good
quality invariants. An experiment was set up to investigate this