Need help to run the tests in specific order #1165
Hi @AdarshdeepCheema, I recommend reading this section of the documentation. In short: Ginkgo does not guarantee the order of specs; in fact it prefers to randomize spec order to prevent spec pollution. Splitting tests by feature or module is not a problem. Requiring strict ordering of tests in the way you are proposing is a problem and can be somewhat hard to maintain. In your case you can place all these specs in a single Ordered container, or you could set up a test suite that installs things at the start, then runs a variety of tests against them, then cleans up at the end. If you could provide more detail about the problem you are trying to solve, the system you are trying to test, and the constraints you are facing ("start-up is slow", etc.), I'd be happy to help you think through how best to organize your specs.
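For reference, a minimal sketch of the single Ordered container approach mentioned above, using Ginkgo v2's `Ordered` decorator. The spec bodies and the `install`/`runSmokeTests`/`uninstall` helpers are hypothetical placeholders, not code from this thread:

```go
package product_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Specs inside an Ordered container always run in the order they are declared,
// and a failure causes the remaining specs in the container to be skipped.
var _ = Describe("product lifecycle", Ordered, func() {
	It("installs the product", func() {
		Expect(install()).To(Succeed()) // install() is a hypothetical helper returning error
	})

	It("exercises the installed product", func() {
		Expect(runSmokeTests()).To(Succeed()) // hypothetical helper
	})

	It("uninstalls the product", func() {
		Expect(uninstall()).To(Succeed()) // hypothetical helper
	})
})
```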
The example that I gave you above is just a prototype. In our case things are far larger in number and size.
Hey @AdarshdeepCheema, there are many examples in the k8s community of folks using Ginkgo to accomplish what you are aiming to do, including the k8s e2e suite itself, which is ~7000 specs that run in parallel in random order. I'm confident I can help you figure this out for your use case and would like to encourage you not to reach for ordered tests as a solution. A common use case that I've seen looks like this:
Often, test suites do this in such a way that each spec is independent and works within a namespace, so that the specs can be run in parallel. This allows a suite comprised of individually slow specs to run much more efficiently in parallel. I don't have enough detail to fully help you; the more you can provide the better, or if you have a repo you can point me to I can help. But, in general, you don't need to worry about having to set up each little step in order. For example, imagine this pseudocode:
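The pseudocode referenced here did not survive extraction; as a stand-in, a hedged Ginkgo-style sketch of the idea (all helpers are hypothetical) might look like this:

```go
package workloads_test

import (
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Each spec does its own setup inline, in its own namespace, so it can run in
// parallel with other specs and in any order.
var _ = Describe("the application", func() {
	It("serves traffic once its dependencies are up", func(ctx SpecContext) {
		ns := createTestNamespace(ctx) // hypothetical helper
		deployDatabase(ctx, ns)        // hypothetical helper
		deployApplication(ctx, ns)     // hypothetical helper

		// wait for this spec's own preconditions rather than relying on spec order
		Eventually(func() error {
			return applicationIsHealthy(ctx, ns) // hypothetical helper
		}).Should(Succeed())

		// the behavior actually under test
		Expect(queryApplication(ctx, ns)).To(Equal("expected response")) // hypothetical helper
	})
})
```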
If any of those setup steps fails, Ginkgo will cease execution, record a failure, and not run the test in question. You don't need to string the separate steps together as a set of ordered specs.
Can we use an Ordered container and then run the specs from a specific file within it? If so, how?
Can you consider this as an enhancement to implement in the future? Although it goes against Ginkgo's design, the user should have the choice to opt into it and accept its consequences.
Hi, you can accomplish this today: simply define your code in a function in the external file and call that function in the It. I will not be adding first-class support for this to Ginkgo, however. I'd be happy to help you analyze and build out solutions for your problem space that better match Ginkgo's semantics and structure. As I've mentioned, I'll need more detail (e.g. you can point me to an open-source repo, or give me just one or two actual fleshed-out examples so I can engage in the conversation more productively). I don't doubt we can figure something out.
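A hedged sketch of what that could look like (file and function names are made up for illustration): the spec logic lives in plain Go functions in one file, and a single Ordered container in another file calls those functions in the sequence you need.

```go
// workload_checks.go — plain Go helpers, no Ginkgo container nodes here
package mysuite_test

import . "github.com/onsi/gomega"

func VerifyWorkloadA() {
	// create/inspect workload A and make Gomega assertions (hypothetical placeholder)
	Expect(true).To(BeTrue())
}

func VerifyWorkloadB() {
	// create/inspect workload B and make Gomega assertions (hypothetical placeholder)
	Expect(true).To(BeTrue())
}
```

```go
// ordered_suite_test.go — one Ordered container controls the sequence
package mysuite_test

import . "github.com/onsi/ginkgo/v2"

var _ = Describe("workloads", Ordered, func() {
	It("verifies workload A", func() { VerifyWorkloadA() }) // defined in workload_checks.go
	It("verifies workload B", func() { VerifyWorkloadB() }) // defined in workload_checks.go
})
```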
This is not an open-source project and we can't share any GitHub links. I want to have 10-30 specs in each of the files that I mentioned above. Can I, for example, add what I have created below in FILE_A?
I can reply in a bit more detail later, but this section of the docs has some discussion of this: https://onsi.github.io/ginkgo/#dynamically-generating-specs. Your example will work, but you will need to pass the relevant variables in explicitly. I think it could be helpful to understand why defining these in a different file is an important design consideration (e.g. is it for reuse, so you can do the same thing in multiple different tests?). I'll have more later today, but I'm on my phone and wanted to share at least that doc link.
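For context, a hedged sketch of the dynamically-generated-specs pattern the linked section describes (the specific variable this comment referred to was lost in extraction): specs generated in a loop must copy the loop variable before the closure uses it, because the spec tree is built up front.

```go
package books_test

import (
	"fmt"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("ISBN lookups", func() {
	isbns := []string{"978-0134190440", "978-1491941959"} // hypothetical data

	for _, isbn := range isbns {
		isbn := isbn // copy the loop variable so each It closes over its own value
		It(fmt.Sprintf("resolves %s", isbn), func() {
			Expect(lookupISBN(isbn)).NotTo(BeEmpty()) // hypothetical helper
		})
	}
})
```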
I am getting an error when I tried to use the above concept.
https://onsi.github.io/ginkgo/#dynamically-generating-specs — this is not what we are looking for. We do not want to reuse a spec; we want to keep the specs separated in different files. We want something like: File1 with Spec_1 to Spec_30, File2 with Spec_31 to Spec_70, and so on until File_N with Spec_N to Spec_M. Each file will generate some workloads like pods, PVCs, replicasets, and CRDs. If we decide to bring everything up at once and only run all the tests later, randomly, then because some workloads are dependent on others we would need to wait 20-30 minutes to allow enough time for all workloads to complete. We do not want to wait 20-30 minutes before initiating the tests. Suppose in File13 we want to verify the status of pvc_13, pods_13, or replicasets_13: that will not run or complete, and will keep failing, unless the things created in File1-File12 have completed without error. That is why we want to run the tests in a specific order, and this is how we want to divide the tests.
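As an illustration of the alternative being discussed in this thread (not code from the thread itself): instead of relying on File1-File12 having run first, a spec can wait, with a bounded timeout, for its own preconditions. `pvcIsBound` is a hypothetical helper.

```go
package storage_test

import (
	"context"
	"time"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

var _ = Describe("pvc_13", func() {
	It("becomes Bound once its dependencies exist", func() {
		ctx := context.Background()
		// poll until the PVC is Bound instead of depending on other files' specs
		Eventually(func() (bool, error) {
			return pvcIsBound(ctx, "pvc-13") // hypothetical helper
		}).WithTimeout(10 * time.Minute).WithPolling(10 * time.Second).Should(BeTrue())
	})
})
```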
@AdarshdeepCheema I'd be happy to join a Zoom call to have a face-to-face conversation about this if you would like. You can send me an e-mail at onsijoe@gmail.com and we can arrange a time; I am in Denver, so Mountain Time Zone. I have structural concerns with the approach you are trying to take, and I feel a face-to-face conversation would help me better surface the actual underlying problem you are trying to solve and show you how I might solve it with Ginkgo.
In case there's interest, please take a look at the new proposal for managing shared resources and having finer-grained control over parallelism here: #1292
I need to carry out a simple CRUD test on a k8s resource, e.g. a ConfigMap. Can I have 4 It blocks with a separate operation in each block, or do I need BeforeEach/AfterEach functions to carry out the CRUD test?
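One possible structure (a hedged sketch, not guidance from the maintainers in this thread): an Ordered container whose four It blocks build on each other, so their sequence is explicit. `newClientset` is a hypothetical helper that builds a client-go clientset for the target cluster.

```go
package configmap_test

import (
	"context"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

var _ = Describe("ConfigMap CRUD", Ordered, func() {
	var (
		client kubernetes.Interface
		ctx    = context.Background()
		ns     = "default"
		name   = "crud-test-cm"
	)

	BeforeAll(func() {
		client = newClientset() // hypothetical helper
	})

	It("creates the ConfigMap", func() {
		cm := &corev1.ConfigMap{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Data:       map[string]string{"key": "v1"},
		}
		_, err := client.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
		Expect(err).NotTo(HaveOccurred())
	})

	It("reads it back", func() {
		got, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
		Expect(err).NotTo(HaveOccurred())
		Expect(got.Data).To(HaveKeyWithValue("key", "v1"))
	})

	It("updates it", func() {
		got, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
		Expect(err).NotTo(HaveOccurred())
		got.Data["key"] = "v2"
		_, err = client.CoreV1().ConfigMaps(ns).Update(ctx, got, metav1.UpdateOptions{})
		Expect(err).NotTo(HaveOccurred())
	})

	It("deletes it", func() {
		err := client.CoreV1().ConfigMaps(ns).Delete(ctx, name, metav1.DeleteOptions{})
		Expect(err).NotTo(HaveOccurred())
	})
})
```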
test suite
test spec
when I run my test twice:
I get 2 orders:
I hope my tests run in the following order:
We need to split the test cases by feature or module; this makes it easier to maintain and add tests. So I'll have multiple test files (01_install_test.go, 02_read_test.go, 03_isbn_test.go, 04_uninstall_test.go), and I hope the tests run in the following order: